When I first heard about Virtual Reality (VR), my reaction was “oh great — more toys for the gaming boys”. That was a foolish dismissal of something that can bring great utility to some unexpected areas.
I love to machine metal (I’m not a professional at it), and there is a visceral joy in making things out of steel and aluminum that’s hard to convey to anyone who hasn’t smelled burning cutting-oil or vacuumed sharp chips of steel out of a shirt pocket. Machine-shop skills also help to round out an engineer’s sense of maker capability. But it can be laborious: just zeroing your cutting-bit on a Bridgeport to establish your X-Y reference, so that you can make your cuts to .001″ accuracy, can consume many minutes of detail-work.
Now imagine this: you put on some awkward-feeling goggles, step up to your machine with a blank piece of steel already snugly clamped in, and say: “Touch the bit to the right and front edges to establish a zero-reference”. The software controlling your milling-machine uses AI-vision (a term I use for Artificial Intelligence-based robotic vision) and AI-speech, along with general knowledge of machining and of *this* specific machine, to know to power up, touch your workpiece to establish the point of reference (using a touch-sensor, or else just high-resolution AI-vision), and *ask* you if it finds any ambiguity in your request.
But it is possible (indeed likely) that it will completely misunderstand what you want. It might take its reading off the wrong edge. Or what if your workpiece has multiple stepped edges, or some other non-simple shape? This is not so much a miscommunication as a *misunderstanding*. And we can attend to it thus:
Imagine now that over the right edge of your workpiece, an image appears of something shaped like a pointer. It floats over to the actual edge and seems to touch it, and magical marker-lines appear that just barely touch the corner, leaving zero doubt as to what they are indicating, as a voice asks: “Is this what you want to use as your X-zero reference?”
You: “No. The edge just below that.”
The pointer moves again.
You: “Yes, right there.”
Now the sensor physically moves there to take its reading, and then we repeat this process for the Y-reference.
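This confirm-then-touch loop is simple enough to sketch in code. Here is a minimal, hypothetical sketch in Python; the AR highlighting, the speech dialogue, and the machine motion are all stubbed out as callbacks, and none of these function names come from any real SDK:

```python
# Hypothetical sketch of the confirm-then-touch dialogue described above.
# `highlight` draws the AR pointer on an edge, `ask_user` poses a yes/no
# question via AI-speech, and `touch_off` moves the sensor to take its
# reading. All three are stand-ins for real AR/CNC plumbing.

def establish_zero(axis, candidate_edges, ask_user, highlight, touch_off):
    """Walk the candidate edges until the user confirms one, then touch off."""
    for edge in candidate_edges:
        highlight(edge)  # pointer and marker-lines appear on this edge
        if ask_user(f"Is this what you want to use as your {axis}-zero reference?"):
            touch_off(edge)  # only now does the sensor physically move
            return edge
    raise RuntimeError(f"No {axis}-zero reference was confirmed")
```

The key design point is that nothing moves until the user has visually confirmed the edge; the dialogue is grounded by the pointer before the machine commits to an action.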
Note that the conversation is instantly clarified — it is placed upon a firmer foundation, simply by pointing to things. This is Augmented Reality (AR) — so named because you still see the actual world from within your headset; it simply adds things to it.
And the effect, and utility of it, can be fantastic.
Note that the application I describe entails AI, which is also my specific area of focus, both academically and in practice. It is my personal passion (that’s the buzzword today, right? “passion”. Really I prefer just to say “this stuff is fun”). And it is most useful, and most fun, when your AI comprehends what’s going on and applies the questions and the visual augmentations only where and when needed.
Now imagine that you’re cutting your workpiece. The AI system accesses the blueprint you gave it (downloading it from your cloud-based data facility) when you say “Get the blueprint for the widget that I drew recently and cut this metal to that shape,” and begins cutting, all the while showing a virtual DRO (Digital Read-Out) with imaginary LEDs flashing the X, Y, Z positions in realtime.
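The virtual DRO itself is mostly a formatting job once the controller reports positions. A toy sketch (the four-digit inch format is my assumption, not anything from a real DRO specification):

```python
def format_dro(x, y, z, digits=4):
    """Render one frame of a virtual DRO readout (toy model, inches)."""
    return f"X {x:+.{digits}f}  Y {y:+.{digits}f}  Z {z:+.{digits}f}"

# In the headset, this string would be redrawn as glowing LED digits each
# time the machine controller streams a new position.
```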
Or it might say: “Your workpiece is too narrow along the Y axis for this item.”
Or it might make a hand or some other pointer appear, pointing to the speed-changer levers, and say: “You will need to set the speed to 1200 rpm,” or remind you to “Move your tools clear of the cutting area and ask that person beside you to please don their safety glasses.”
If there is an object or piece of the machine that would block its motion, an AR red-flag appears and points to it: “This will obstruct my movement.”
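These warnings amount to a pre-flight checklist that the AI runs before (and during) the cut. A minimal sketch, with a deliberately toy model of the machine; the one-dimensional travel envelope and the class names here are my own invention for illustration:

```python
from dataclasses import dataclass

@dataclass
class Part:
    width_y: float  # extent along the Y axis, in inches

@dataclass
class Obstacle:
    name: str
    position: float  # location along the table travel, in inches (toy model)

def preflight_warnings(blueprint, workpiece, obstacles, travel_min, travel_max):
    """Collect the AR red-flag messages to raise before cutting begins."""
    warnings = []
    if workpiece.width_y < blueprint.width_y:
        warnings.append("Your workpiece is too narrow along the Y axis for this item.")
    for obj in obstacles:
        if travel_min <= obj.position <= travel_max:  # sits inside the travel path
            warnings.append(f"{obj.name} will obstruct my movement.")
    return warnings
```

Each returned string would be rendered as an AR red-flag anchored to the offending object, rather than as plain text.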
Helping to show you what to do next on this machine (a milling-machine can be quite a complex kit of machinery), to set your references, guide the cutting, change bits, etc. — these are all radical improvements to the process.
There is one essential point here:
Incorporating Augmented Reality, in conjunction with careful application of AI and good UX design, can radically transform a complex task into something simpler, faster, and safer. The improvement in productivity can be substantial.
The utility of this is akin to having a human looking over your shoulder, one with a perfect pointer, who never gets distracted and never fails to spot a safety hazard.
I chose one specific application for AR here because, in my experience, it stands out as a great need — but it is not hard to imagine many other uses. I have touched upon some basic augmentations of the user experience: pointing to things, confirming intent, giving a progress report in realtime, safety, instruction (again, using pointers of various types), and grounding the AI-speech conversation with actual illustration. If you want to get started with AR/VR, you have several vendors to draw upon: Facebook provides the Oculus headset and SDK, and Alphabet, Samsung, and Sony have products in this field or are preparing them. I’ll be writing some how-to articles on this topic shortly.
I believe there is substantial fruit here, to be harvested by the visionaries who can see the possibilities and put them into action.