Machine Learning: It is said that human beings can manipulate objects without thinking. I am currently typing without thinking about every key I press, and I have just taken a sip of tea without misjudging the weight of the mug and spilling it. These things seem simple, but when it comes to replicating this behaviour in machines, it is a very different story.
Picking up objects without thought requires precise movements. We, as humans, can do this because nerve endings in our skin provide tactile feedback to the brain about the dimensions and feel of the thing being grasped. Over time, this becomes instinctive.
However, for a machine, this sort of activity is hard. Take picking up a mug of tea, as I just did. The steps for a machine to complete this task are (sketched in code below):
- Identify the object through a camera
- Use a pre-programmed grasping strategy specific to what it thinks it saw
- Pick up the object
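To make that concrete, here is a minimal sketch of that traditional, pre-programmed pipeline. This is not the MIT system: the object labels, grasp parameters and helper functions are placeholders I have invented for illustration, and a real robot would replace the print statement with motor commands.

```python
# A minimal sketch (not the MIT system) of the classic pre-programmed pipeline:
# identify the object from a camera image, look up a hard-coded grasp, execute it.
# All labels and parameters below are illustrative placeholders.

GRASP_STRATEGIES = {
    "mug": {"approach": "side", "grip_width_mm": 80, "force_n": 8.0},
    "pen": {"approach": "top", "grip_width_mm": 12, "force_n": 1.5},
    "scissors": {"approach": "top", "grip_width_mm": 25, "force_n": 3.0},
}

def identify_object(camera_image) -> str:
    """Stand-in for a vision model; a real system would return its prediction."""
    return "mug"  # placeholder prediction

def pick_up(label: str) -> None:
    """Look up the pre-programmed grasp for the predicted label and 'execute' it."""
    strategy = GRASP_STRATEGIES.get(label)
    if strategy is None:
        raise ValueError(f"No pre-programmed grasp for '{label}'")
    print(f"Grasping {label} from the {strategy['approach']} at {strategy['force_n']} N")

pick_up(identify_object(camera_image=None))
```

The weakness this illustrates is that everything hinges on the hard-coded strategy table: anything the camera misidentifies, or any object not in the table, cannot be grasped.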
However, computer scientists at the Massachusetts Institute of Technology think that they have found a way for technology to better reflect how people manipulate objects.
In a piece for Nature, the team present their work on learning the signatures of the human grasp using a scalable tactile glove. Although I did not fully understand the science behind it, the general idea is that within the glove there are pieces of film which generate electricity in response to pressure, alongside localised pressure sensors.
The experiment involved asking people to wear the glove while handling everyday objects, including a pen, scissors, a spoon and a mug. A computer then recorded the signals from the thread within the glove.
A machine-learning programme was then set to interpret these signals as a visual representation, with areas of low and high pressure shown in different colours. Once trained on these pressure maps, the computer was able to identify any of the 26 everyday objects from the glove's readings alone.
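As a rough illustration of the idea, and emphatically not the team's actual code, the sketch below treats each glove reading as a small "image" of pressure values and trains an off-the-shelf classifier to name the object being held. The 16x16 sensor grid, the 26 classes and the synthetic data are all assumptions made purely for demonstration.

```python
# Sketch: classify "which object is being held" from a grid of pressure readings.
# Synthetic data stands in for real glove recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_classes, grid = 26, 16  # 26 everyday objects, hypothetical 16x16 pressure grid

# Fake dataset: each object class gets a characteristic pressure pattern plus noise.
prototypes = rng.random((n_classes, grid * grid))
labels = rng.integers(0, n_classes, size=2600)
pressure_maps = prototypes[labels] + 0.1 * rng.standard_normal((2600, grid * grid))

X_train, X_test, y_train, y_test = train_test_split(pressure_maps, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy on synthetic pressure maps: {clf.score(X_test, y_test):.2f}")
```

The point of the toy example is simply that a pressure map can be treated like a picture: each grasp leaves a pattern, and a classifier can learn which pattern belongs to which object.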
What was the point of the experiment?
Overall, the experiment tests how a human hand exerts force. This will help in programming robots when training them to complete human actions. For example, the group believe it could help the designers of prosthetic limbs to implement machine learning.
Accessibility
Whenever I read about emerging technologies, my mind immediately goes to the expense and accessibility of the items. However, these gloves cost only $10 to make. The team want to encourage their production, as completely understanding how people use their hands will require enormous amounts of data. Given the price of the device, that looks achievable.
The example the team give of something already in widespread use is computer vision: because most people have access to a camera, images are easy to share, label and process by computers. Although in this case it is likely that people would have to be paid to wear the gloves, in comparison to the seemingly endless supply of photographs this is still relatively inexpensive.
Why is this important?
In Serviceteam IT’s 2018 research, we found that 24% of respondents listed a skills shortage as having the greatest impact on their business over the next three years. There have been many steps forward in terms of apprenticeship schemes and efforts to increase diversity in tech. Alongside this, however, is a fear that these types of machines, programmed to act like humans (with less human error), will take human jobs.
However, there is more than one use for this type of technology. For example, the team have stressed its importance for prosthetic limbs and for improving people's lives. I have also previously researched the concerns raised during the industrial revolution and the fear that it would take human jobs; instead, it created new, and arguably more, jobs. The work was of a different type, but it was work nonetheless.
Whatever your stance on introducing machines, the development is inevitable. In my opinion, given that inevitability, making machines as effective as they can be can only be a positive.