Artificial Intelligence Error Proofing in a Robotic Workcell

Mar 29, 2021

The advantages of automating a manufacturing process are well documented. Including machine vision with the automation can benefit the process in many ways. Vision can enhance the robotic process with 2D and 3D part location, inspection, and error proofing. This discussion will cover the machine vision error proofing aspects of a robotic workcell. Specifically, the topic will focus on Artificial Intelligence Error Proofing (AIEP).

Discussion points: 

  • What can AIEP do, what can it not do? 

  • How will AIEP help the manufacturing process? 

  • How does AI differ from manually taught machine vision error proofing? 

  • Present some AIEP setup examples. 

AIEP can be used to easily and quickly set up error proofing tests in many different aspects of the manufacturing process. Whether it is a new installation or an existing installation, adding error proofing to a robotic system can often improve the manufacturing process and prevent costly issues further down the chain. Error proofing can reduce rework cost and scrap rate, saving money throughout the many stages of the manufacturing process.



Transcript:

Joshua Person:

All right. Thank you very much. The Zoom technicalities are now over. So, as I was introduced, my name is Josh Person. I work at FANUC America, a robotics company out of Rochester Hills outside of Detroit, Michigan. Today, I'd like to talk about error proofing in a robotic work cell and I'd like to feature artificial intelligence in that error proofing realm. My brief agenda is, what is error proofing? So, a lot of times people don't even know what error proofing is and they don't know how to apply it.

Joshua Person:

I'm going to go over what error proofing is so that you can kind of understand what you're getting into and you don't get yourself into trouble when you're applying it to your manufacturing process. Then, I'd like to talk about error proofing versus inspection. They both are checking the part or checking the process. Error proofing is looking for mistakes that may have happened; inspection is looking for cosmetic defects, dimensional tolerances, metrology.

Joshua Person:

So, error proofing is a lot easier for a machine to do than inspection. Maybe error proofing could be considered gross inspection in some ways. I'd like to talk about AI error proofing as well. AI error proofing is the concept of showing the vision system good and bad examples and telling it which one is which so that, for future examples, it can determine whether a part gets put into the good bucket or the bad bucket, the good classification or the bad classification.
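
To make that good-bucket/bad-bucket idea concrete, here is a minimal Python sketch of one way such a binary classifier could work. It is an illustration only, not FANUC's algorithm: the grid-of-intensities feature extractor is a made-up placeholder, and a new part is simply assigned to whichever taught bucket holds its nearest example.

```python
import numpy as np

def extract_features(image):
    """Placeholder feature extractor: an 8x8 grid of mean intensities."""
    h, w = image.shape
    gh, gw = h // 8, w // 8
    cells = [image[r:r + gh, c:c + gw].mean()
             for r in range(0, gh * 8, gh)
             for c in range(0, gw * 8, gw)]
    return np.array(cells)

class BinaryErrorProof:
    """Toy good-bucket/bad-bucket classifier: nearest taught example wins."""
    def __init__(self):
        self.examples = {"good": [], "bad": []}

    def teach(self, image, label):
        # Manual classification: a person tells the system which is which.
        self.examples[label].append(extract_features(image))

    def classify(self, image):
        f = extract_features(image)
        # Distance from this part to its closest taught example per bucket.
        dists = {label: min(np.linalg.norm(f - ex) for ex in exs)
                 for label, exs in self.examples.items() if exs}
        return min(dists, key=dists.get)
```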

Joshua Person:

I'm not going to go over the AI error proofing algorithm. My goal today is to talk about how error proofing and AI error proofing can help your manufacturing process. My goal is not to teach you how to write your own AI error proofing algorithms. If you're hoping to write your own AI error proofing algorithms inside your manufacturing process, well, that's awfully advanced for what we're doing. You're probably better off going to either your automation manufacturer or a third-party vision system to help you with that.

Joshua Person:

Next, I've got a simple little demo set up showing AI error proofing in action. I've got some parts that look basically the same whether they're upside down or right side up, and I'm going to show you how I can teach AI error proofing to differentiate between the two different faces. It'll be a few-minute video that we'll go through, and I'll narrate it as we go. Then let's talk about adding error proofing to an existing automated process.

Joshua Person:

Maybe you already have a process, maybe you're thinking about having an automated process. Well, if you add error proofing, that should be a relatively cost effective way to improve your process. If you are getting mistakes out there that are costing you money, let's see what we can do to add error proofing to help eliminate those mistakes to save you money. If you already have an existing process, you know which mistakes are getting out there and how much they cost you, so it's pretty easy for you to decide, okay, what do I need to check and how much am I willing to spend?

Joshua Person:

Then I'm going to close on a dedicated error proofing robot. Robots are very good at a lot of different things: material handling, dispense, spot welding. They have quite a range of flexibility. Well, one thing they can do is simply hold a camera and move it around the part and take pictures. So it's a very easy job for the robot, holding a camera and moving it around using its six degrees of freedom, but sometimes it's overlooked, and you might think the cost of a robot itself would be prohibitive for this process.

Joshua Person:

But then I question, what's the cost of these errors getting out to your customers in the end? So maybe the cost of the robot or the system has a pretty easy return on investment. What is error proofing? Well, it can be many things. It can be things like checking to make sure that a previous operation happened, checking to make sure it's the correct part presented to the automation or the correct face up, or checking different characteristics of the part: that it's got the right number of holes or the connector's connected.

Joshua Person:

It can be lots of different things, but it's not going to be inspection. I'm not touting that error proofing is going to do dimensional tolerances; it's not going to be doing surface flaw or cosmetic things. It's a good idea to invest in error proofing and check your part before you put more energy into it. Making bad parts is very expensive, so you can either get that part out of the system or even find the root cause of the error so you don't continue to make bad ones.

Joshua Person:

I've got a simple picture of a poka-yoke gauge. That's more of a manual process, but it is error proofing; in this case, the operator is doing the checking. If the part fits in the green, the go side, then it's good. If the red fits, then it's bad. There's many different things a human operator can do. Well, a robot can do a lot of the same things, maybe not with the same poka-yoke, but I guess actually maybe it could. It could hold it and figure that out.

Joshua Person:

We've also got 2D vision systems, 3D vision systems, or even dedicated sensors. If you've got a process that has a tab on it, the robot picks it up and you want to check to make sure that tab exists. Maybe your machine is finicky and it doesn't always punch that tab out. You could just put a simple prox sensor on the robot tooling. As the robot's picking the part to load it, if that prox sensor's on, well, then you've got a good part. If the prox sensor's off, you've got a bad part.
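
As a rough sketch of that prox-sensor check, the controller-side logic could look something like the Python below. The robot and I/O objects here are hypothetical stand-ins for whatever your controller actually provides, not a real FANUC API.

```python
def tab_present(io, channel=1):
    """True if the prox sensor sees the tab while the part is gripped."""
    return io.read_digital_input(channel)   # hypothetical I/O call

def handle_pick(robot, io):
    robot.pick_part()
    if tab_present(io):
        robot.load_machine()        # sensor on: the tab exists, good part
    else:
        robot.drop_in_reject_bin()  # sensor off: tab missing, bad part
```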

Joshua Person:

So error proofing comes in many different forms. So I like to think about the error proofing scale. Some error proofing applications are going to be easy and some are going to be hard. That's just the plain fact of how the world goes. So an easy one is something you want to focus on. Can you easily improve your process or is it going to be difficult? If it's going to be hard, maybe you want to bypass that altogether and come up with a different solution.

Joshua Person:

So when I say easy, there are many different ways it can be easy. You want to ask yourself, how many different classifications are there? Are there just two? Is it a good part and a bad part, where bad can be a loaded question? Is it right side up or upside down? Is it binary? Are there only two, or are there many classifications? Can it be part A and mistakenly be part B, C, D, all the way up to Z? Well, that would be a challenge, because what if part G and part R look very similar? You're going to have trouble differentiating between the two.

Joshua Person:

So that's how many classifications there are, and how different is each classification? How alike do the good parts look versus how alike do the bad parts look? Is the error always the same? In the lower left-hand side, I've got a thread-present application. The error proofing is just determining if the thread is present or not. Did the upstream machine that threaded this rod do its job? It's not looking to see if there's any burrs in the thread. It's not looking at the pitch. It's not looking at the thread count.

Joshua Person:

It's just, are there threads or not? If you're looking for burrs, that might be inspection. And from this point of view, there might be burrs all around the back that you don't even see. So even if you had a great algorithm, you'd never know it. You'd have to change your process and look at all sides. But a simple present or not present on the threads, that's going to be very easy, because all the threaded rods should look very similar and all the non-threaded rods look very similar.

Joshua Person:

The final thing is, what counts as an error? So here, I've got a simulated injection-molded part with the brand name on it, and the process can leave these scratches or defects in it. So I've got a good one where it says "Solution" there, with no extra scratches in it. So let's look at some of the surface defects. You've got a big one here, covering the S and covering the W. But I know that each time there's a defect, it's not going to be only across the W and the S in that orientation; it's going to be everywhere.

Joshua Person:

So each classification of what is bad is going to be different. And finally, what counts as an error? My graphic, well, right here in the H, you can see a slight defect, a slight orange line in the last swoosh. So is that an error or not? And to be honest, it kind of depends on whoever's in charge of production that day. He might say, "Nah, that's good enough." Or if it's a smaller area, he might say, "That's good enough."

Joshua Person:

At some point, you have to decide what's good enough, and it really boils down to you as a consumer. Would you pass on this product if it had a little tiny scratch in the H or not? And it's up to the manufacturing people to try to figure out what the consumers are going to consider an error, in this cosmetic example. So, error proofing versus inspection. The definitions can overlap a little bit, but error proofing, or mistake proofing, is really just checking if the operation happened, or if it's the correct operation.

Joshua Person:

Whereas inspection's looking for surface flaws or dimensional tolerances, cosmetic defects. On the left-hand side, I have a simple error proofing application: is it the right connector? Do those two connectors go together? And with a quick inspection, poor choice of words, I can visually check: is the connector the right one? And I can see it's not. So either the orange connector's the wrong one, or the green one is the wrong one. It does not fit.

Joshua Person:

The error proofing isn't looking to make sure the holes are the right size or they're in the right place. It's just looking to make sure it's the right connector. In this case, it is not. When you have inspection, it can take many forms. Here's an example where you have some structured lighting looking at this panel to make sure it is formed properly. And this is probably a human operator inspecting this panel.

Joshua Person:

And they're very good at looking to make sure that all the lines are kind of flowing properly, that there's no drastic hiccups in any line. So they're checking to make sure it's a good part, but that's more so inspection; panels don't always fail in one way. They might have dents or ripples, and the lighting would reflect a little bit differently, but that's a lot more complex than just, is it the right panel or not?

Joshua Person:

So I'd like to talk about AI error proofing. AI error proofing with a 2D or a 3D vision system is where you show it examples of the good parts and examples of the bad parts, or the upside down parts, the wrong part, whatever you're trying to classify. You show it examples, and then the AI algorithm is able to compare all the good ones and put them in one classification, and all the bad ones and put them in another classification.

Joshua Person:

I'm going to talk about binary classification today, where there's just two classifications, where all the bad parts should look similar and all the good parts should look similar. And of course, not everything looks identical. That's where the AI algorithm is designed to allow some variation from part to part but still know what's good and what's bad. But the AI, the artificial intelligence, comes from the manual classification.

Joshua Person:

You can't put a part in front of a machine and say, "You decide, you tell me if it's good or not." The machine needs some guidance on what it's going to be. And manual classification, where you tell it what's good and what's bad at the beginning, is a great way to get AI error proofing. So the learning aspect comes in when you train it on what's good and what's bad. And then in the future, a part comes through and the algorithm is unsure whether it is going to be in that classification of good or bad.

Joshua Person:

If it's unsure, it could flag the operator and the operator could come over and manually classify that one and say, "Yep, it is indeed a bad part. Add that to the algorithm." And then you've just improved your algorithm. Now it knows more examples of what bad is going to be, and it's able to differentiate between the two. So once the vision system determines whether it's good or bad, it needs to communicate that to the automation.
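
Here is a hedged sketch of that operator-in-the-loop idea, building on the toy classifier from the earlier sketch. The confidence formula and the 0.25 threshold are illustrative assumptions, not FANUC's numbers, and `ask_operator` stands in for however your cell actually flags a person.

```python
import numpy as np

class ActiveErrorProof(BinaryErrorProof):  # extends the earlier toy sketch
    def classify_with_confidence(self, image):
        f = extract_features(image)
        dists = {label: min(np.linalg.norm(f - ex) for ex in exs)
                 for label, exs in self.examples.items() if exs}
        best = min(dists, key=dists.get)
        worst = max(dists, key=dists.get)
        # Confidence: how much closer the winning bucket is than the other.
        confidence = 1.0 - dists[best] / (dists[worst] + 1e-9)
        return best, confidence

def run_part(model, image, ask_operator, threshold=0.25):
    label, confidence = model.classify_with_confidence(image)
    if confidence < threshold:
        # Unsure: flag the operator, take their answer as ground truth,
        # and fold it back into the examples. The algorithm just improved.
        label = ask_operator(image)
        model.teach(image, label)
    return label
```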

Joshua Person:

And there's many different ways to do that, because ultimately, it's the automation that needs to decide what to do. The vision system's just a tool to tell it if it's good or bad. Does the automation, like a robot, pick it up and put it in the reject bin? Does it flip it over? Does it rework it? That's up to the automation itself; the vision system's just kind of guiding it.
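
One minimal way to express that division of labor is a lookup from verdict to action, with the verdict and action names below made up for illustration:

```python
# The vision system only reports a verdict; the automation owns the decision.
ACTIONS = {
    "good":        "load_machine",
    "upside_down": "flip_at_turnover_station",
    "bad":         "drop_in_reject_bin",
}

def next_robot_action(verdict):
    """Map a vision verdict to the robot's next move, failing safe."""
    return ACTIONS.get(verdict, "stop_and_call_operator")
```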

Joshua Person:

And a lot of what's in AI error proofing could be done with manual tools that are in many of the different vision packages, either from the robot manufacturer or from a third-party vision system. You can use the tools in that vision system to differentiate between good parts and bad parts, but AI error proofing, its job is to take the expertise away from the vision engineer and do it for him. So with AI, you do not need the expert ability to use all the tools. You just use the one tool and let the AI figure it out.

Joshua Person:

Here are some examples I've come across that are good for AI error proofing, starting at the bottom left-hand side. Very, very simple job. You are looking at this product, a little case of damaged-nut removers, and you want to make sure that they are all right side up. You do not want them upside down getting onto the store shelves. So this application isn't inspecting them; it isn't making sure that they're all sharp and burr-free.

Joshua Person:

It's just looking: is it upside down or right side up? So there's really two simple classifications, and you can use AI error proofing to show it what the good ones, the right side up, look like, and show it what the bad ones, the upside down, look like. Simple enough, very binary. Now moving to the lower right, I've got two wheels where it's looking to make sure it is the correct wheel pattern. And if it is, it passes; if it's not, it fails.

Joshua Person:

With the lighting in this image, you can't even tell the quality of the wheels themselves, whether they're shiny. You can see lots of potential defects on the wheels, but we're not inspecting the wheels to make sure that they are correct. We're just doing one last check to make sure it is the correct wheel. If you go to the dealership and you buy a vehicle that has three premium wheels but one of them is the base-level wheel, you might be a dissatisfied customer, and that'd be pretty expensive for the dealer or the manufacturer to solve that problem.

Joshua Person:

Now, an easy example is the presence of a nut. Whether it's the top or bottom, front or back nut, you can train it on what it looks like with one and what it looks like without one. And this application's a little challenging because the metal's shiny, it's dirty, and it reflects differently, but the AI algorithm should be able to differentiate between the two examples. Again, we're not looking to make sure that the weld is correct, or making sure that the pitch on the nut is correct.

Joshua Person:

We're just seeing presence or absence. Very, very easy. Finally, another example is the vacuum seal on these pouches of cheese. I don't know about you, but I've opened up my cheese drawer at home and pulled out a brand new brick of cheese and realized it has a poor seal. So then I'm faced with the dilemma: is this cheese good or not? And I honestly don't really know which way I lean on that, but I'm much happier if I've opened up a brand new package of cheese and it has a good seal.

Joshua Person:

So AI error proofing could be used to look down from the top at the brick of cheese. And the human brain can see the subtle differences. On the one on the left, you can see sharp edges all around the rectangular portion of the cheese, nice and flat on top, compared to the one on the right. The right one is more pillowy, more rounded. You can still tell it's cheese in there, but it's a little bit different. So let's let the AI algorithm figure out which one's good and which one's bad just based on, more or less, how pillowy it is, if I can make up words.

Joshua Person:

Now, I'd like to go through an example of training the AI error proofing. So in this example, I've got parts that are either face up or face down. On the right-hand side, you can see the right side up parts and upside down parts. And thanks to my camera angle, you should be able to see a difference. On the right side up parts, this section is flat on the tabletop; on the upside down ones, it's raised a little bit. On this one, the edges are concave versus convex. Same for the center.

Joshua Person:

But if you look overhead straight down with a simple black and white camera, it's really hard to differentiate between the two. I can see maybe right here, I can see this one's upside down, and right here, this one's right side up, but it's a very subtle difference. So let me launch my video and go through it. Sorry about the video quality. I had to reduce it to get it to go through this Zoom.

Joshua Person:

So here is the iRVision setup screen for FANUC robots. I've got right side up and upside down, live image now, get rid of my tags, and let's put a part in the field of view, and we're going to train it on what the right side up part looks like. So it's roughly in the center of the field of view, draw a box. First, I'm going to name it. This one can be up. So the cyan-colored one is going to be up and the orange one's going to be down.

Joshua Person:

And I remember this one is up. So I draw a box around what I want to look at, the entire part. So then I can initiate the learning algorithm. And it's going quickly. But what I'm going to do is snap some new images and give it new examples here as we speak. So here's the first example, it is live. I'm going to find the part and add it to my algorithm. Next, I'm going to find the part, add it. I'm going to do that a few times to give it some examples.

Joshua Person:

Here, I'll put in two parts, and these are still all right side up. I haven't flipped them over yet. Get them in the field of view, add them, and move them around a little bit to get some variability in what the parts look like. And I could add different parts, not just these two. I could go from there. Now I'm going to flip over the part. Looks about the same to me, but it is indeed upside down. I'm going to find it, move it around, get it in the field of view. There we go, and find it.

Joshua Person:

Add a couple parts. So in the initial case, you just need a few of each example, but once you're actually putting this in production, you might want more examples; it really depends on how variable your parts are. So here is a list of the first six. And I remember the first six are all right side up, and I could analyze the images to verify that with my own eyes. But I just remember I did six in a row right side up, and on the next page, coincidentally, they're all going to be upside down. So for the next four, I had it upside down.

Joshua Person:

So I'm manually classifying them so the algorithm has examples of what looks like what. Hit next, and it's going to train. And it's not too sure that I did these correctly. It's asking, "Are you sure those are upside down?" I'm like, "Yes, yes, I'm sure." So now that the algorithm is done, if I do a snap and find, it's going to find these two and know for sure that they're both upside down. Change the scene a little bit, flip them over. So now these two should be cyan and right side up.

Joshua Person:

So it's able to find them upside down or right side up. Let's go to another example with different parts and do a snap and find. Well, for one of the parts, it knows it's right side up, but its confidence level is 38% out of 100. So maybe I could improve that. So I manually add this one and I tell the algorithm, "This is also what a good one looks like." And then it asks me, "Are you sure about those two?" So now I've just improved my algorithm. And it found that one and is much more confident.

Joshua Person:

One last time, I'll flip them over to make sure it works. Snap and find, and now it knows these two are upside down. So, that's the end of this example. The idea is, by showing it a few examples, it should be able to differentiate between upside down and right side up, good or bad, as long as you're limiting your number of classes. And as long as most of the right side up ones look alike and most of the upside down ones look alike, you can set up your error proofing in such a way that the algorithm figures it out for you.

Joshua Person:

So why would you want to add error proofing to your automated work cell? My first statement is pretty contradictory. It's both very accurate and very much a lie. In manual manufacturing processes, the human operator is very good at spotting gross errors, and it is indeed true that the human brain can spot these errors, that they're very good at it. But if anybody's worked in any manufacturing facility with manual operators, they know after a long weekend or a long shift or a midnight shift, maybe the human operator isn't as good at spotting the gross errors.

Joshua Person:

So they're good at it, but maybe they're not always good at it. Well, a robotic system doesn't have the ability of using the human eye and the human brain. So you need to set up the error proofing yourself. You need to figure out where it's needed and how to set it up. The robot is told to do its job over and over again every day, and if you don't teach it what to do when there's something wrong, well, then you've got a hole in your process.

Joshua Person:

So, if you allow your errored part or your bad part to continue through the process and continue to add value to it, that could be very costly. It's much more efficient to find it right away. And also, if you catch errors quickly, you can get the root cause corrected so you're not making hundreds or thousands of bad parts before you catch it and go back in and fix it. In many automated manufacturing areas, the process doesn't have any human eyes on it from start until it's packaged and shipped away.

Joshua Person:

So if there's no human eyes on it, you need your robot and your automation system to be able to determine, whether it's with cameras or other processes, whether the parts are indeed good or not. Here's a simple example: on the left-hand side, the robot's picking these little disks out of a bin, and the disks can be upside down or right side up. The robot would pick them and load a machine. Well, if it loads the machine with the part upside down, the process fails, and the operator has to come over and fix the problem.

Joshua Person:

So they added a camera here. The robot picks it, flies by this camera, takes a picture, and the camera tells the robot whether it's got the part in the gripper right side up or upside down. If it's right side up, it'll load the machine. If it's upside down, it'll take it to a turnover station, flip it over, regrab it, and load the machine. Another very general application for error proofing is on the right-hand side: just a big giant robot spot welding on an automotive body.
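
Here is a sketch of that flyby sequence, with hypothetical robot and camera helpers standing in for the real cell's interfaces:

```python
def process_disk(robot, camera):
    """Pick a disk, check its orientation during a flyby, and load it."""
    robot.pick_from_bin()
    image = camera.snap()                     # picture taken mid-flight
    if camera.classify(image) == "upside_down":
        robot.flip_at_turnover_station()      # flip, regrab, carry on
    robot.load_machine()
```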

Joshua Person:

Well, maybe you could have a camera overhead to determine if that body has a sunroof or not. The system itself is supposed to already know whether it has a sunroof, but I'd hate for the robot to try to go in through the sunroof and weld around its edges if it didn't exist. That might be rather loud; your supervisor might come over and wonder what all the racket's about. So throw in a camera over there to look for a sunroof, or look to make sure it's a two-door versus a four-door.

Joshua Person:

It should always be right, but in manufacturing, it's not always right. You might want to mistake proof it or error check it. So you have opportunities in your automated process to add error proofing, probably with very little investment. Here is an example of a robot that already has a vision system on it. On the left-hand side, it's looking down at these little red and blue automotive fuses. It picks up the red ones and puts them in one half of the shipping container, picks up the blue ones and puts them in the other half.

Joshua Person:

So it's already got a vision system. It's picking them and dropping them very fast. The hopper keeps it fed, and the little process runs and runs and runs. Well, what if the actual fuse part of it, this little U-shaped element here, is sometimes missing because of the process? Well, I don't want to buy a package of fuses that have already all been blown; that kind of defeats the purpose.

Joshua Person:

So I'm not a fuse manufacturer, and I doubt that is a real problem, but it could be. So instead of just finding the blue one and putting it in the blue side, or the red one in the red side, let's add a process to make sure that that centerpiece is indeed there. If it's not there, communicate to that robot: instead of loading it into the customer's package, throw it into the reject bin. And then if you find out that that's happening quite a bit, you can fix the upstream process to figure out why that's happening.

Joshua Person:

So it's an existing process. You've already got a camera, you've got a robot, you've got a process. Let's add value to it just by adding a couple of extra tools, probably a very low investment. Now, another simple example on the right-hand side: the robot's picking up these makeup compacts and loading them into a three-by-three array. They come down, a camera looks at them, finds them, and puts them in. Well, maybe the camera can be trained so that if one's upside down, the robot does something else with it, or if it's the wrong product, does something else with it.

Joshua Person:

It's a little late now to start inspecting to make sure that the makeup inside is all good and it's got the correct brush. But this is a simple addition to an existing process: let's check one last time to make sure we're not loading these upside down, because you don't want them on the store shelves with eight of them right side up and one of them upside down. So, coming from a robot company, one of my favorite things to do is have a robot dedicated to error proofing.

Joshua Person:

Let's add a camera to the robot here. In my example, it's a collaborative FANUC robot. Inside of this black enclosure is a camera, and its job is to look around the seat for different errors. The customer identified the clips, the holes, the brackets, the different objects that they have trouble with. And then you can add a robot to go around and take pictures of them. So the initial investment of this system is probably a lot cheaper than producing a few errors.

Joshua Person:

If you're loading the vehicle with the heated seats unplugged, and they're getting all over the country with heated seats that don't work, somebody is probably going to have a very expensive problem. All of a sudden, the cost of this robot and this vision system isn't really that bad, especially since this is an example of a collaborative robot that can interact with people. It does not need the safety fencing that the robot behind it has around it.

Joshua Person:

This robot is allowed to move along with people. It won't run into them and hurt them. So you can have error proofing with just a simple robot. The robot can hold the camera, position it in six degrees of freedom, move it around and determine what's good or what's bad, and then communicate that back to the cell interface so that it can be taken care of. In summary, adding error proofing to a robotic work cell is an excellent way to add value to your manufacturing process.

Joshua Person:

And just please understand that error proofing and part inspection are often two different things. Often people want to buy an affordable error proofing system to do their inspection because, more times than not, a dedicated inspection system is going to be costly, with dedicated lights and dedicated sensors. So trying to solve a complex problem cheaply isn't always going to work out for you.

Joshua Person:

And AI error proofing is a great way to distinguish between multiple classes automatically, whether those classes are good and bad, part one and part two, or whatever classifications your parts fit in. And adding AI error proofing offers a very quick return on investment for some of your easier error proofing applications. And why not just add a robot solely to do the error proofing? It can handle it. It can work alongside the other robots, and you can have a much better manufacturing process.

Joshua Person:

So I'd like to thank you guys for attending this virtual conference and listening to me talk about AI error proofing. In closing, before we get to questions and answers, here's a simple work cell I worked on for a show, handling these toy cars and inspecting them. You can see the camera flashing its red light, inspecting all around the car to make sure it's got things like the correct wheel, the correct roof.

Joshua Person:

Make sure it's got a license plate, make sure the interior's put in, just a simple application on a toy car which would represent a real car with a larger robot. In this case, the robot is showing you that it's missing a license plate. So some fun with robots that I get to have on my daily job. So that's it for me. I'd like to hear some of the questions that we have from the panel. Anybody have any questions?

Stephen LaMarca:

Josh, you're in luck. We've got a slew of questions for you. Let me cue them up. Our first one came in from Isaac: for example, with the damaged-nut remover case, am I correct in assuming that the vision system does not necessarily need to be taught the individual zones that hold each individual remover, but instead identifies areas in which individual removers are present and are all similar?

Joshua Person:

Yeah, you're right. I talked about whether one of them was good or bad in one of the zones, but your software could look at the whole array. I think it was two by five, and it could determine if all of them are good or bad and communicate back to the cell controller that the 14 millimeter one is indeed the one that is bad. So yes, you can let the software figure it out for itself and communicate it back.

Stephen LaMarca:

Cool. One of my initial questions was, what are some of the initial limitations of the software and inspection system? Is it something like, size, color, shape, or can those all be ironed out by the software?

Joshua Person:

Yeah, that's a bit of a loaded question, because all of those variables come into play, and that kind of goes into my slide about easy versus hard. There are easy applications where the size is drastically different, and especially with a 2D camera, you can easily determine whether it's the right size or not. But in some applications, a one micrometer difference might be what separates good versus bad. So, that's a completely different set of tools; you need specialty cameras.

Joshua Person:

I like to think, as a rule of thumb, if the human eye can differentiate pretty easily, the AI error proofing can too. But the human eye isn't great at sub-millimeter part dimensioning.

Stephen LaMarca:

I've got to say, talking about the human eye, when you brought up the slide with the brackets being upside down or not, I'd like to think my vision's pretty good, that I've got pretty awesome vision actually. But there were some of those pictures where I thought, "I don't know how a robot can figure out which one's upside down or not, because in this picture, I can't tell which one's upside down or not."

Stephen LaMarca:

I don't know if you've seen it. There's actually a meme going around social media right now that's sparking a lot of debate as to whether or not, in a picture of an avocado sliced in half, the pit's there or not. And I feel like this is exactly where we need a FANUC vision system to tell us whether or not that pit's there. Rebecca, what did you think?

Rebecca Kurfess:

I thought it was great. I can definitely see where this would add value to the process. Like you said, if you can pick out parts that are defective before you add more value to them, long-term, you're saving your company money and you can hopefully figure out the root cause. I was curious, so, you mentioned that this is not the same thing as inspection, which I definitely appreciate.

Rebecca Kurfess:

But do you think there's any space for using the data from error proofing in some sort of digital twin, especially in parts that are maybe lower criticality? So I think there was a previous presentation that said, nuts and bolts generally don't have to be super up to spec and inspected with all these data sheets that come with them. Do you see any value or any space for the error proofing software in that capability?

Joshua Person:

Yes. Error proofing is good at, let's say, gross inspection. So, the definitions of the two kind of overlap a little bit. And really, if it's gross inspection, or if you want to check something specific, maybe it is checking microscopic differences. Maybe you'd have to have a camera focused in so much that those microscopic differences are actually much larger in the field of view and no longer microscopic. So cameras can do better than what the eye can do.

Joshua Person:

But inspection itself, there are lots and lots of different applications. Are you looking at weld quality? Are you looking at dispense bead quality? There's dedicated things in there. And the AI error proofing, especially from a 2D overhead example, can make sure that the bead is there and probably make sure it's the right width. But it can't check if it's got too much air or if it's actually going inside where it's supposed to be. So you have to really kind of be careful about what you sign up for.

Rebecca Kurfess:

That makes sense. We have another question from Joseph. Does the AI error proofing option come with the standard iRVision 2D package?

Joshua Person:

Yes, that's a very good question. The AI error proofing is an adder to the 2D package. So he's referencing FANUC's integrated vision system, iRVision. And really, its bread and butter is robot guidance. Our package is there to find parts and move to them. So the AI error proofing is kind of a bonus adder, but it doesn't come directly with the package itself. There's an upcharge.

Rebecca Kurfess:

Okay. Thank you.

Stephen LaMarca:

My next question is how important is the environment for the vision system? Does the vision system need to be ... can it just be bolted to any robot arm or does it need to be in a specific booth or clean room with like white walls around it or matte textures, so it can properly verify the surfaces?

Joshua Person:

That's a good question. And the variability is so much that it really depends on the application. You look at a robot itself, some of them have to be explosion proof because they are in an environment that requires it. Some of them, some of the plants don't even have walls, they just have a ceiling over top of them. And that is okay. So same with vision systems. And in general, you want to control your environment with a vision system, especially sunlight can ... we all know it can be pretty bright.

Joshua Person:

You want to maybe shroud that. But some applications are so easy that it really doesn't matter, and in some applications, you really have to control your environment; things change. And I don't want your work cell to run its best at nighttime and struggle during the day on sunny days. So if that is in your application, maybe you have to control that, whether it's with a vision system or just a robot in general.

Stephen LaMarca:

How large of an IT infrastructure does technology like this need? For example, would you need a high bandwidth server or network to handle this kind of tech?

Joshua Person:

The examples I showed with FANUC's iRVision system, it is all done on the robot controller. The robot controller is the computer, roughly speaking, that controls the robot. It tells the servo amplifier how much to move each joint, where to go; it controls the motion and the operating system. So the robot controller is, very generically speaking, the computer that controls the robot, and the AI algorithm and the entire iRVision software run on that.

Joshua Person:

So there isn't any additional hardware or software that you need to go get. But you can also buy a third-party vision system that maybe runs on a PC, or maybe runs on a smart camera, and that would use its own horsepower. And it would just communicate to the automation: yep, it's good or bad. The automation would listen. So generally, you don't need a lot of bandwidth, but maybe there's some AI algorithms out there that are way, way more powerful.
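
As a sketch of that PC-based variant, a vision node could report verdicts to the automation over a plain TCP socket, as below. The port and message format are assumptions for illustration; real cells often use fieldbus I/O or the vendor's own protocol instead.

```python
import socket

def serve_verdicts(classify, host="0.0.0.0", port=5020):
    """Answer each trigger from the automation with a good/bad verdict."""
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()                 # one automation client
        with conn:
            while True:
                trigger = conn.recv(1024)      # e.g., a "check now" message
                if not trigger:
                    break
                verdict = classify(trigger)    # returns b"good" or b"bad"
                conn.sendall(verdict + b"\n")  # the automation decides what to do
```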

Joshua Person:

Maybe they're connected to the Google cloud and have millions of images. So what I showed was very simple and it handles a lot of the easy to medium applications. If you want to get complicated, you can sign up and open up your checkbook.

Stephen LaMarca:

Fair enough. And what types of skills or training are required to get somebody like an operator or a technician up to speed to use equipment like this? I take it that might be a loaded question too, in that it might vary based on what kind of robot is employed.

Joshua Person:

Well, one thing, as a robot company, and for all the robot companies, our goal is to make robots as user-friendly as we can. If the robot takes a PhD in robotics to run on the factory floor, it's not going to be successful. You need a robot system that can be run by either skilled trades, if there's any problems, or operators on the day-to-day stuff. So same with this error proofing stuff; it is designed to be run by operators. An operator can be trained to add new examples and manually classify them.

Joshua Person:

So somewhere in between, the operator can run it day to day. The skilled trades should be able to perform maintenance and train new things. And then maybe you need a robot person on call to do the drastic stuff. And that's not only vision or error proofing. It's just your entire automated work cell. You kind of need to have different levels of expertise. But definitely to be successful, we spend lots of effort to make this stuff user-friendly.

Stephen LaMarca:

Very well. And I think we've got one more question that came in from Isaac, again, is that-

Rebecca Kurfess:

Sorry. There was one that was there originally, but I didn't see because sometimes I struggle with reading. Where do you see unsupervised training effective in machine vision?

Joshua Person:

Unsupervised training is a lot more complicated. There's a lot more going on. And I'm not a general artificial intelligence machine vision expert; I focus more on our products and our AI. But there are lots of think tanks, lots of work on how these machines can figure stuff out on their own. One approach is supervised training: this is good, this is bad. The other is figuring it out on its own, but that data has to come from somewhere. The inputs have to come from somewhere.

Joshua Person:

So how it's trained, whether it's supervised or unsupervised, there's a lot to it. And sure, once we get further and further into artificial intelligence and these machines are able to think better, we're going to have even better processes. So, yes, I see it as a big thing in the future. And lots of smart people are putting lots of effort towards it.

Rebecca Kurfess:

Thanks. Actually, I have a semi-related question. So, in your experience, how much time does it usually take for these systems, not only to be installed and trained, but to be running optimally? So where an operator wouldn't really have to go in very frequently to recheck parts and confirm or deny that they're good or bad. And how do you find that the setting affects that? So if you are in a facility where you can see easily if it's daytime or nighttime, how do you find that that affects the timescale of this implementation?

Joshua Person:

It really depends on how many classifications you have. If there's two, like I showed, upside down and right side up, and how different the two classifications are. And the third thing is, how alike is everything within the same classification? So if you're making a part and all the good ones look perfect, and all the bad ones look bad in the same way, you could honestly train three or four and it's never going to fail.

Joshua Person:

But if your part is like the vacuum seal on the cheese, where all the bad ones don't look alike and all the good ones don't look alike, you've got these things in the gray area that you have to kind of manually take care of. So it's either three examples of each and you're good to go forever, or maybe a whole shift of supervised learning. So in the beginning, you know you have to have the operator, pay him some overtime, sit in there and supervise the learning.

Joshua Person:

And then after that, maybe that operator is free to go do other tasks, his standard job. And once a day, the light comes on or something; then, all of a sudden, it's once a week, and then it's once a month. So, the expectation that it's never going to need retraining really depends on the part. Is the part changing over time? Well, then the algorithm has to change too. It doesn't morph all by itself.

Stephen LaMarca:

I think that's a really fascinating point, because in the past few years, and actually the past few generations of smartphones, even Google Pixel cameras have been employing AI machine learning for taking pictures. And it's cool because, when I got my first Google Pixel phone, I got an email with some instructions on how to utilize and make the best of the camera using the machine learning software.

Stephen LaMarca:

And it was basically, when you take a picture of something, take a handful of pictures, take three or four pictures and delete the ones you don't like, and do that every time you take a picture, and eventually the software and the phone will learn which ones you don't like. And I'm on my fifth-generation Pixel now, and it's still learning from all of the ones I started with on the first-generation Pixel. And I don't need to do that process anymore.

Stephen LaMarca:

Now I take one picture and usually it's done. But sometimes, I'll take a second one and then of course delete the one I don't like, and it continues to learn. So it's wild how we're seeing that process in machine learning be applied to vision systems. And it's really cool now seeing it applied to industrial robots and kind of nice that I have it in my pocket too.

Joshua Person:

The Pixel phone wouldn't have known which picture was a good one in the beginning if you didn't tell it. And you told it by saving the one you liked and deleting the ones you didn't like, and you were the supervised operator.

Stephen LaMarca:

Yes, I am a technician. I feel totally qualified now. Our last question, from Isaac again, is: is the AI error proofing also an add-on for the 3DL, 3DV, and 3D AS systems?

Joshua Person:

Yes. Isaac's asking about the 3D products that FANUC offers. So we can find parts in 3D or 2D, depending on the application. The 3D products use 3D sensors to locate a part and give you the XYZ and yaw, pitch, roll location of the part. And in the iRVision product, you can add the AI error proofing to any of those packages for a minor upcharge.

Stephen LaMarca:

All right, Josh. Well, this has been a real pleasure. And it's fascinating learning about machine learning and also how it's being implemented by an industry-standard robotics company.

Joshua Person:

No, I love the opportunity. I appreciate you guys putting this together for me.

Author
Joshua Person
Senior Engineer