
AI-powered Process and Quality Monitoring for Automotive Welding

Weld Spot Analytics (WSA) is a software solution that helps welding engineers make faster, more accurate decisions and increase weld quality while avoiding inefficiencies and reducing waste. Welding operations face many challenges...
Mar 28, 2021

Weld Spot Analytics (WSA) is a software solution that helps welding engineers make faster, more accurate decisions and increase weld quality while avoiding inefficiencies and reducing waste. Welding operations face many challenges: controllers provide their data only at the cell level, which makes it difficult to cross-correlate data from other cells in order, for example, to detect the quality trend of a part as it moves along the line. Destructive tests are the only reliable method of establishing the ground truth of whether a spot was welded according to quality specifications. Unfortunately, this process is highly inefficient, and there has been no tool to help welding engineers decide which part to send for testing. Until now.

At its core, the Weld Spot Analytics software provides easy access to all welding controllers on the shop floor. No more wasting time sifting through complicated interfaces: the most useful information is just a couple of clicks away. Finally, some of the latest machine learning algorithms analyze the data and help welding engineers quickly find their answers.



Transcript:

Stephen LaMarca:

Our first guest and presenter is Matteo Dariol, lead innovation strategist at Bosch Rexroth. Matteo, you want to take it away for us?

Matteo Dariol:

Sounds good. Thank you very much for the introduction. So this morning... first of all, let me share my screen. This morning I would like to introduce to everybody some of the research that we have done in the field of automated welding specifically. But if we extend the concepts that we have found and researched, we can also, generally speaking, [inaudible 00:00:49] apply them to manufacturing. So some of the takeaways and lessons learned are going to be relevant for a larger audience. But as I said, specifically I will be presenting some of the use cases around the automotive welding domain.

Matteo Dariol:

So first off, let me start with some of the key points that I will outline in the first part of the presentation. Usually I keep this slide at the very end of my presentations, but I wanted to put it here first because I wanted to make it clear that these are going to be the building blocks, the essential components that I'm going to be discussing throughout the first part. And then in the second part, I'm going to discuss the most important use case that we are building now. One of the use cases, at least.

Matteo Dariol:

So first off, as a good practice, you should always start with why. Why are you doing this? Why is this important? Some of the pain points that we saw when we started our explorations are the ones you see on the slide here. Destructive testing is way too expensive and time-consuming, and there is no other easy way to capture the quality of a weld. But maybe first I should say that what we are discussing now is capturing the process and quality of welding operations. So how do we do this in the first place? Destructive testing is the standard in the industry: you basically take two welded sheets of metal, tear them apart, and measure the size of the nugget. We're going to see later in the presentation what that means.

Matteo Dariol:

Another big pain point is that it is challenging to assess the quality of a part as it moves across the lines. You have different stations. In a typical automotive welding operation you might have 400 to 800 robots, and they all do welding. So it is very challenging to follow through and measure the quality of each part as it moves along. And then, last but not least, the data collected by the welding timers, the devices you see on the right-hand side of the screen, is essentially isolated. So right now, at this moment, there is no easy way for any user in the space to really capture a holistic view of what is going on in the plant. And we found that automotive welding is really an underserved market in this sense.

Matteo Dariol:

So what does that mean, being isolated in terms of data? As the part moves along the lines, as you see on the left-hand side, you have guns on the end effector of the robots that are doing the weld. The data is then collected by the welding timer controller and sent to some kind of software that resides at the cell level, station level, whatever you have. But the data is not shared, is not sent across. There is no easy way of communicating data across cells, but also across different [inaudible 00:04:31] levels. That software was built 15, 20, 25 years ago, so it's very much a legacy environment. And the trouble is that the engineer is really going station by station, writing down the quality results on a piece of paper by hand, and then inferring decisions based on those observations. It's a very manual process. It's not really a 2020-type process.

Matteo Dariol:

So let me jump into the first block. The first thing that we faced when we decided to tackle this problem was what type of software architecture to use. Which is most suitable for solving this specific problem? And we found that the major ones were the three you see on the screen, monolithic, microservices, and serverless, and we'll go through some of the pros and cons right now.

Matteo Dariol:

Monolithic is the old way of doing software. I'm not going to go through all the pros and cons, but you can imagine that you need some kind of server or IT infrastructure to run your software. You control the whole execution and the whole data and the whole [inaudible 00:05:54]. It's essentially a one-unit block that is not very flexible in that sense, and it's very hard to exchange data to and from other software. It doesn't really make it easy for a smaller team to navigate the space.

Matteo Dariol:

On the other side, microservices are, I would say, probably the opposite end of the spectrum, where you have extreme flexibility in creating smaller services, hence the word microservices, each of which generates one specific result. So each service, or microservice, takes care of one piece of the business logic, and they exchange data just as they would exchange data with other external software. This brings a lot of flexibility and ease of use in terms of plug-and-play modules, but it makes for a big headache when it comes to DevOps and testing. Whenever you deploy your systems, you really need to orchestrate everything together very well.

Matteo Dariol:

And finally, I believe this is the best of both worlds so far. Serverless architectures are what's becoming most popular right now, I would say, thanks to the advent of the cloud service providers. And serverless, just like the word says, means you don't manage a server yourself. You shift your focus from the physical hardware and the network to the logic, to your code. You gain the extreme flexibility of just taking care of your business code, your business logic, without caring about where the actual code is executed. I don't have too much time to go into the details, but just as a takeaway, serverless is at the core of the cloud platforms, and among all the other benefits of using a cloud platform provider, this is certainly one of them. And when we discuss the approach to software... I forgot to mention the AI-powered solutions and services. Another good reason to use cloud providers, cloud platforms, is that there are native solutions and AI services that can help you on the journey to AI and machine learning: from collecting the data, to creating the experiments, to generating your models, to testing the models, and so on and so forth.
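
To make the serverless idea concrete, here is a minimal sketch of what an ingestion function might look like, assuming an AWS Lambda-style Python handler; the payload fields and validation are illustrative assumptions, not the actual WSA code:

```python
import json

def handler(event, context):
    """Serverless entry point: the cloud provider invokes this function
    on demand and bills per execution; no server is provisioned by us.
    (Sketch only; the payload fields below are illustrative.)"""
    record = json.loads(event["body"])
    # Business logic only: validate the weld record, then hand it off.
    required = {"spot_id", "current_ka", "voltage_v", "phase_angle_deg"}
    missing = required - record.keys()
    if missing:
        return {"statusCode": 400, "body": f"missing fields: {sorted(missing)}"}
    # ... enqueue for ingestion / the ML pipeline (omitted) ...
    return {"statusCode": 200, "body": json.dumps({"accepted": record["spot_id"]})}
```

The team writes only the logic inside the handler; where and when it runs is the provider's problem.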

Matteo Dariol:

And the advantages are multiple. The first one that comes to my mind is the availability of GPU farms. It would be really challenging for any one customer to build up the same infrastructure, in terms of GPU and [inaudible 00:09:00] computation, that a cloud provider can already give you. And it's really easy to deploy, it's easy to use. It really makes it easy to implement the paradigm of CI/CD, continuous integration and continuous delivery, in your workflow. So that is certainly a major advantage of using a serverless cloud architecture.

Matteo Dariol:

So let me go over a high-level overview of our architecture. Just like we said, we start with the data: welding equipment, welding guns, robots. We might also have PLCs and other data sources such as testing benches. The data is sent to an edge component that provides the equipment connectivity and the edge-side inference. And this component is really important for any cloud architecture. I know we discussed the cloud a lot, and sending data to the cloud, but there is no cloud without a good edge, so this is a key component for this whole thing to work. The data is then sent to our cloud backend and ingested. It runs through the main business logic and then eventually goes into the machine learning services that the cloud platform offers, whether it's [inaudible 00:10:28], Azure, or Google Cloud. It doesn't matter; they all have the same potential. And then from those services, what you're going to get is a model. A model is a mathematical representation of your system. And the next step is really important for understanding the closed-loop feedback that we want to have in our system, in our solution.
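
To make that data flow concrete, here is one hypothetical shape for the record that travels from the edge to the cloud backend; the field names are illustrative assumptions, not the WSA schema:

```python
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class WeldRecord:
    """One resistance-weld event as it might travel from edge to cloud."""
    spot_id: str            # which spot on which part (illustrative)
    timer_id: str           # welding timer/controller that produced it
    current_ka: float       # weld current, kiloamps
    voltage_v: float        # electrode voltage, volts
    phase_angle_deg: float  # firing/phase angle reported by the timer
    timestamp: float        # epoch seconds

# Serialize one record for the hop to the cloud backend.
record = WeldRecord("B12-057", "timer-cell3", 9.8, 1.42, 37.5, time.time())
payload = json.dumps(asdict(record))
```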

Matteo Dariol:

So we start from the data from the machine, we collect the data, we generate the models, and then we share the model with the edge. In this way, we close the loop. And then, as we continue gathering data, we will always be revising, improving, and re-sharing the models down to the edge. We're going to see later how this works. From an edge perspective, we have a [inaudible 00:11:20] that already exists in your operations, on your shop floor. You have your welding equipment. Typically the welding guns are attached to the end effector of the [inaudible 00:11:30] robot, and they are executing the welds on the parts. And then the welding timers are sending data to the software that is already executing on the line PC or in the cell.

Matteo Dariol:

And what's going on is that you will typically have a data gateway that gets the data from your local database. Just like we saw at the beginning, you have your data gateway, you have your local database that is collecting the data in your cell, and then you need a component that extracts the data and sends it somewhere else. In this case, you can see there is an MQTT layer that gets the data from the gateway, or the IoT connector, or whatever you have, and then exposes an MQTT channel to the edge container. And the edge container is what we have developed.

Matteo Dariol:

As I was saying, your data goes to the data gateway and is exposed over a common protocol on an MQTT channel. We then build an edge component onto another server. The edge component receives the MQTT data, and then we finally have our edge logic. This is really important because there might be things that we want to run on the edge, such as machine learning models for inference; we might have to do pre-processing and filtering of the data. Different things might be necessary at the edge level, because you don't want to share all the data, so you can filter it easily. And then, if you remember the previous step, you want to receive the model from your cloud and then execute it on the edge.
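
A minimal sketch of such an edge component, assuming Python with the paho-mqtt client (1.x callback API); the broker address, topics, filter rule, and inference hook are all illustrative stand-ins:

```python
import json

import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x callback API

BROKER_HOST = "gateway.local"           # illustrative: the data gateway's broker
RAW_TOPIC = "plant/cell3/welds/raw"     # illustrative topic layout
FILTERED_TOPIC = "plant/cell3/welds/filtered"

def on_connect(client, userdata, flags, rc):
    client.subscribe(RAW_TOPIC)

def on_message(client, userdata, msg):
    record = json.loads(msg.payload)
    # Pre-processing/filtering: don't forward records the cloud never needs.
    if record.get("current_ka", 0.0) <= 0.0:
        return
    # Edge-side inference hook: run the latest model shared by the cloud.
    # record["predicted_nugget_mm"] = edge_model.predict(...)  # placeholder
    client.publish(FILTERED_TOPIC, json.dumps(record))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()  # blocking loop: receive, filter, forward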

Matteo Dariol:

What are the main problems with this approach? As you can imagine, with every component that you add, you increase the complexity and make it harder to troubleshoot problems. So something that we have seen in our experience is that it becomes really, really important to monitor every component, every piece of software that you install, and, most importantly, to log everything. Because one component might be something that you develop, and another component might be something that you took from a third party. So it is very hard to troubleshoot and see where the problem happens, because all of a sudden you're not going to have data. So where is the problem? Who is the troublemaker? You need to go and look at the logs and see, component by component, where it happened. And something that we have done successfully in our experience is building a series of heartbeats. Heartbeats are essentially the pulse of your software: they send a timed signal to our cloud backend saying, "I'm alive. Everything's fine. This is my telemetry." With this approach, we were able to troubleshoot our problems much, much more easily.
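
A sketch of such a heartbeat, again assuming paho-mqtt; the topic layout and telemetry fields are illustrative:

```python
import json
import threading
import time

import paho.mqtt.client as mqtt  # paho-mqtt 1.x callback API assumed

def start_heartbeat(client: mqtt.Client, component: str, period_s: float = 30.0):
    """Periodically publish "I'm alive, this is my telemetry" so the
    cloud backend can tell exactly which component went silent."""
    def beat():
        while True:
            client.publish(
                f"plant/heartbeat/{component}",  # illustrative topic
                json.dumps({
                    "component": component,
                    "status": "alive",
                    "ts": time.time(),
                }),
            )
            time.sleep(period_s)

    threading.Thread(target=beat, daemon=True).start()
```

A missing pulse then points straight at the silent component instead of forcing a log hunt across the whole chain.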

Matteo Dariol:

Another thing that I wanted to stress this morning is the importance of common protocols. So you heard me talking about MQTT. I'm going to glance over this; I don't want to spend too much time. But I'm sure a lot of you know about MQTT and know the importance of common protocols, especially for IoT applications. The slide that you see now is a scenario that you probably see in your operations. At any given moment in any given plant, you have machines and sensors exchanging data: some sensor data going into databases, some CNC machines sending data to a [inaudible 00:15:17] system, some other sensor data going to the AI analytics, some going to the cloud, some more data going to a database and your [inaudible 00:15:28] tool. And all of them typically have a proprietary protocol that locks you in, that forces you to implement that protocol and increases even more the complexity of the whole infrastructure, of the whole network. It would be much, much better if there were a center to this communication paradigm, something that everybody can talk to and receive information from.

Matteo Dariol:

So what I'm drawing right now on the screen is essentially how an MQTT network would work. You publish your data to a broker, a server, and then you subscribe to it as a client to receive the latest information. And this makes everything much more manageable, easier to troubleshoot, and brings a lot of other benefits that you can imagine. If you put this in the perspective of the ISA-95 stack, you can see this being really important. It also matters whenever you share data with your ERP system, your MES, and others. You want protocols that are future-proof, that are open, that are commonly used, and that have a good community behind them that can help you whenever there is a problem.
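
The producer side of that hub-and-spoke picture is correspondingly small; a sketch using paho-mqtt's one-shot publish helper, with illustrative broker and topic names:

```python
import json

import paho.mqtt.publish as publish  # paho-mqtt's one-shot publish helper

# Any producer (gateway, CNC, sensor) publishes to the central broker once;
# every interested consumer subscribes. No point-to-point wiring needed.
weld = {"spot_id": "B12-057", "current_ka": 9.8, "voltage_v": 1.42}
publish.single(
    "plant/cell3/welds/raw",          # illustrative topic
    json.dumps(weld),
    hostname="broker.plant.example",  # illustrative central broker
)
```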

Matteo Dariol:

And finally, if you remember the very beginning, there were four boxes; this is the last one I want to discuss before the use case. I want to spend a couple of words on the UI and UX of everything. Why are the UI and UX important? Because ultimately this is how your data is going to be exposed to an audience, this is how your users are going to use the information and the data that you create. You want to make it easy for yourself to implement a flexible, good-looking user interface, but you also want to create something that is engaging and easy, intuitive, for the user. So I made the example here on the screen of Vue.js. Currently Vue.js is among the most commonly used UI frameworks, JavaScript frameworks, and it makes a lot of things easier for a developer to deal with. Just like right here: the model-view-viewmodel paradigm and other modularity paradigms are embedded in this framework, and it makes it extremely easy to prototype a new tool, create a new page, and then manage everything in a very seamless fashion.

Matteo Dariol:

What is the problem with front-end frameworks? As you can see on the screen, the landscape of frameworks changes so rapidly that every three to five years everything has changed, everything's upside down again, and you probably have to evaluate whether you need to learn a new one. As you see on the screen, we had [inaudible 00:18:34] being the leader for a very long time, and now [inaudible 00:18:37] is right here in the middle of everything; nobody's using it anymore, so it's slowly going down. Vue.js is becoming more and more popular, but in 2015 there was not a lot of adoption, and by 2020 it was among the most used. So I guess the point that I'm making here is that this is an extremely fast-moving space. You shouldn't spend too much time following everything; maybe just go with the major trend and see what benefit it can bring to you.

Matteo Dariol:

The final point here is on the UX. UX is extremely important, and in most cases it is not regarded as it should be. It's considered at the end of a development cycle rather than at the beginning, which is where we put it. We decided to apply some basic UX principles from the very beginning of our solution creation because we believe that we can do something better for our users. The points that you see on the screen are some of the key points that we followed throughout our development. First, no user manual. You shouldn't make something so complex that you need a manual. We can make it easy and intuitive for the user so that they can find everything they need.

Matteo Dariol:

Fewer clicks. It goes very well with the first one, right? You shouldn't need to spend 7, 8, 10 clicks to find the information that you want. Everything should be within three, maybe four, clicks. Also, give the user a sense of familiarity. As the user browses around to figure out where the information is, where the data is, we should not make them feel lost. We should always tell them where they are in the space and what they're looking at, and give hints on where to find something. And then finally, create a modern look and feel. There's no reason why we should create another ugly, 1990s industrial automation software; we want to make something more modern, just like any other website that you navigate every day in your private life.

Matteo Dariol:

So with that said, let me go over... I think I'm okay on time. Let me go over the use case that I wanted to present today: nugget size prediction. Let me circle back to what I said at the beginning. In order to assess the quality of a resistance weld, it is really important to determine the size of the nugget. The nugget, physically, is what you see at the top of the picture on the screen: the molten volume of metal that is shared across two, or more, sheets of metal. And the diameter of the nugget, the minimum diameter I should say, must at all times be within certain limits; otherwise the automotive manufacturer must consider that spot not good. So the most common approach to evaluating that quality is to tear the sheets of metal apart and measure the size of the nugget. That is what destructive testing is and how it is done right now. Other types of testing, non-destructive testing such as ultrasonic, are also used, but they are not as reliable as destructive testing.
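
As a toy illustration of that acceptance rule, here is a check of the minimum nugget diameter against spec limits; the 4 mm and 7 mm numbers are placeholders, since real limits come from the manufacturer's weld specification for the given sheet stack:

```python
def spot_within_spec(min_diameter_mm: float,
                     lower_mm: float = 4.0,
                     upper_mm: float = 7.0) -> bool:
    """True if the nugget's minimum diameter falls inside the allowed
    window. The limits are placeholders, not real OEM spec values."""
    return lower_mm <= min_diameter_mm <= upper_mm

assert spot_within_spec(5.2)      # acceptable nugget
assert not spot_within_spec(3.1)  # undersized: spot not good
```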

Matteo Dariol:

And so what we can do is enhance the data that we collect from our welding controller, all the electrical features: the voltage, the current, the phase angle, all the quality indicators that we can collect. And we can develop a machine learning model that can predict the size of the nugget. This is eventually going to lead to less scrap, less wasted time, fewer tests needed, and higher quality, because the customer is eventually going to use our welding information to make decisions faster and also more reliably.

Matteo Dariol:

What you see here on the screen is a very high-level representation of what a model is. As I mentioned earlier, a model is a mathematical representation of the physical system. So you have your inputs from your welding controller that go into your physical system and produce the weld, but the same inputs also go into the mathematical representation. In a sense, you are comparing the two, and you want to create a model that reduces the error to zero. That is, from a very conceptual, high-level view, what is going on. From a machine learning point of view, this is the process that we follow whenever we create these things: you have your data collection, your data cleaning and feature extraction, your model creation and training, and then your evaluation. At the end of this process, you have an evaluated model, created from the data that you collected.
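
A compressed sketch of that collect-clean-train-evaluate loop in scikit-learn, using synthetic stand-in data; in reality the inputs would be the controller's electrical features, with nugget diameters from teardown tests as labels, and the model family here is an arbitrary choice for illustration, not necessarily WSA's:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: columns = current (kA), voltage (V), phase
# angle (deg); target = nugget diameter (mm) from destructive tests.
rng = np.random.default_rng(0)
X = rng.normal([9.5, 1.4, 35.0], [0.5, 0.1, 3.0], size=(500, 3))
y = 0.6 * X[:, 0] - 1.2 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(0, 0.2, 500)

# Model creation, training, and evaluation on held-out welds.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"mean absolute error on held-out welds: {mae:.2f} mm")
```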

Matteo Dariol:

But where is the advantage? The advantage comes whenever you use new data, because, as I said at the beginning, you're going to have a continuous flow of data into the system; you're going to have continuous integration and continuous delivery. So you're not going to be stuck with one pass of this model creation; you're going to have continuous data coming in. And what you want to do eventually is retrain and retest your models, so that you can create better models based on the larger data sets that you have accumulated. This is the main advantage of using, again, a cloud serverless architecture: you are using an infinitely scalable system that allows you to do machine learning very easily.
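
One hedged sketch of that retrain-and-compare step: retrain on the accumulated data and only promote, and re-share with the edge, a candidate that beats the currently deployed model on held-out data. The function and variable names are mine, for illustration:

```python
from sklearn.base import clone
from sklearn.metrics import mean_absolute_error

def retrain_and_maybe_promote(deployed_model, X_all, y_all, X_val, y_val):
    """Retrain on the full accumulated data set; return the candidate
    only if it beats the deployed model on the validation split."""
    candidate = clone(deployed_model).fit(X_all, y_all)
    deployed_mae = mean_absolute_error(y_val, deployed_model.predict(X_val))
    candidate_mae = mean_absolute_error(y_val, candidate.predict(X_val))
    if candidate_mae < deployed_mae:
        return candidate   # share this model down to the edge
    return deployed_model  # keep what is already running
```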

Matteo Dariol:

So what is going on with our predictions? The old way of interpreting the quality data to estimate the nugget size is this: the part is welded by the robot, the weld data is generated and stored in the welding software, and then a human goes to the station, looks at the data, and writes it down on a piece of paper, just like I saw in many, many plants. The nugget size prediction is written on a piece of paper, and there is more or less error depending on the information available to the welding engineer or quality engineer, and on the skill of that person. So what is wrong with this approach? You have lower decision confidence and also lower accuracy. You still need to do all the tests, because the nugget size is only measured after the test; the person can decide what to send to testing, but until you test, you don't know what is really going on. And this is truly reactive behavior. There is no point at which you can intervene in time to detect the quality.

Matteo Dariol:

The second approach, which is completely different from the other one, is with a prediction. So let's assume we already have a machine learning model that predicts the size of the nugget. The first steps are still the same: you have a car being welded, you have your data collected by the welding software, and then, this time, the data is sent to an AI model that will look at the data, look for patterns, and extrapolate the quality information. Now we have a predicted nugget size that is most likely what we expect it to be, because of all the training that we have done and because we have a certain degree of accuracy in the system, more than 90 percent for sure; 90, 95 percent. So our degree of confidence is certainly higher in this case, and users are going to be more confident relying on the results that we provide.

Matteo Dariol:

And also, testing becomes optional. You don't have to do it every time; you can do it maybe once a day, or even less than that. But most importantly, predictions are now available for all the spots. Previously, you only had a measured nugget for the one spot that you destroyed; now you have a predicted nugget value for every single spot. This is an incredible advantage for every operation that wants to really see what is going on and predict, in a proactive way, what is really happening with the quality and with the welding equipment.
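
Because every spot now carries a predicted diameter, the decision of which spots, if any, to send to the teardown bench can itself be automated; a sketch with placeholder limits and margin:

```python
def route_spot(model, features, lower_mm=4.0, upper_mm=7.0, margin_mm=0.3):
    """Predict one spot's nugget diameter and suggest an action.
    Limits and margin are placeholders for the real weld spec."""
    diameter = float(model.predict([features])[0])
    if not lower_mm <= diameter <= upper_mm:
        return diameter, "out of spec: flag the part"
    if diameter - lower_mm < margin_mm:
        return diameter, "borderline: candidate for destructive test"
    return diameter, "in spec: no test needed"

# e.g. route_spot(model, [9.8, 1.42, 37.5]) with the model trained above
```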

Matteo Dariol:

So with that said, let me conclude with some takeaways that hopefully you will take from this presentation. Use future-proof protocols: try to rely on open-source standards and commonly used protocols and software, but make sure there is a community behind them and that there is also support for them. You don't want to be stuck with something that is not widely used, where you don't know whom to ask whenever something goes wrong. Start small: whenever you're dealing with complex problem solving, AI, machine learning, you don't want to make things too complicated. There is always time to make it complex, so start slow, figure out what the problem is, figure out what it is you need to find out, the question you're answering, and then build up your solution based on those assumptions. Understand the problem well: in the majority of cases, this means having a lot of people, a lot of stakeholders, sitting at the same table. You don't want just the quality people; you're not going to talk just to the IT guys or the welding engineers. You want all of them, including the business stakeholders, sitting at the same table, making sure that everybody understands the problem and sees what is needed. And everybody must agree on goals and procedures.

Matteo Dariol:

Outsource when necessary: there are a lot of opportunities for third parties to provide outsourced services, especially when it comes to front-end development. And avoid lock-in effects: I've mentioned many times that cloud platforms are good for you, but one of the drawbacks of cloud platforms is the lock-in effect. If you're using a cloud, you should be aware of this problem and try to mitigate it as much as you can. I think that's the very last point that I wanted to make today, so let's open it up to questions. Thank you very much.

Stephen LaMarca:

Matteo, thanks a lot. That was a really good and simplified explanation of what is seemingly an incredibly complicated or complex use case and concept in general. Thomas, did you want to start with the questions? If not, I've got a few for myself.

Thomas Feldhausen:

No, I definitely have some questions, Steve. Great presentation, Matteo. I really liked how you were able to take existing process data and use your system, your algorithms, and apply them in a very new and interesting way. I see a lot of people moving to cloud-deployed architectures, and a lot of manufacturers are talking about the power of it. But how do manufacturers make sure their data's safe and secure? What should people be looking at and evaluating when they look at these cloud-based platforms?

Matteo Dariol:

That's an interesting question. If I have to be honest, the first question that we hear from all of our customers is, "Who owns the data, where does my data go, what happens to my data?" Data privacy, data storage, and everything that revolves around data is always the number one priority, the number one thing to ask about. What I would like to say is that there are a number of military organizations, the government, a lot of people that put security at the highest priority of what they do, and they are using cloud architectures. This is certainly one thing to consider. So if they are relying on them, why shouldn't you? Why not trust them at the same level they do?

Matteo Dariol:

The second point is that, especially if you are not a large organization, it's going to be very expensive for you, in terms of people, finding the right skills, execution, and building the infrastructure, to really get up to speed on the latest cybersecurity developments. And so one drawback of a cloud platform is the high cost, but the benefit is that you don't have to worry about any of this. If there is an alert, if there is an alarm, if a vulnerability is found in one of [inaudible 00:32:37] or something like this, you don't have to worry about it, because they're going to fix it for you right away, and you can be sure that they do, by contract, because that's what they offer. So you're paying for the service of the serverless architecture, but you're also paying for the convenience of forgetting about all of these problems. And as I said, they are truly the experts from a network standpoint, from a cybersecurity standpoint. So I think: why shouldn't you trust them?

Thomas Feldhausen:

Yeah. Completely agree. Great answer.

Stephen LaMarca:

Matteo, my questions are certainly not as good as Thomas', but the first one I wanted to bring up goes back to your slide where you drew the visualization of what you called a centralized broker. Would that come with a sort of delay in response when trying to connect from one device to another, having a sort of centralized middleman like that, as compared to a direct data connection? Or is that just not an issue in this sort of case?

Matteo Dariol:

That is a good point, actually. That is a good point. The point that I wanted to make is that we shouldn't use that everywhere, every time, because... something else that I didn't mention is that if you only have one component, you have a single point of failure. It can be bad for you if that broker fails and all communication is lost, so you want redundancy in that sense. MQTT specifically is a very lightweight protocol: the messages exchanged on this network are extremely lightweight by definition, and the performance of MQTT brokers is very high. They're very, very scalable. In my experience, I have never found an MQTT broker getting stuck processing data, because an MQTT broker is essentially like a satellite: it gets the data and then sends it back down, just like a mirror. It bounces the data down to earth, to the clients that want to get it. So in one sense, I personally have not experienced a lot of drawbacks in terms of performance, but this is certainly something you should consider. Like I said earlier, the single point of failure is something you should be concerned about. You probably want a separate mechanism for specifically time-critical or mission-critical applications, for sure.

Stephen LaMarca:

Okay. Thank you. That does answer my question.

Thomas Feldhausen:

So Matteo, I really like the use case you gave for spot welding, where you can use the analytics to actually understand the quality control of these spot welds. And you talked briefly about how destructive testing is really the gold standard nowadays, and there are other techniques like ultrasonic. Do you foresee this algorithm, this technique, being coupled with these up-and-coming techniques, so that combined they can be just as good, if not better, than destructive testing?

Matteo Dariol:

For sure. For sure. And this is actually what we are doing now. We are helping our current users, our current customers, to validate the results coming from non-destructive tests in order to produce results that are comparable to, if not even better than, destructive tests. Destructive testing is an activity that is still required by law, but the goal is to have everybody doing destructive testing only to the extent required by law, no more, no less, which is very infrequent, maybe once a month or so. I don't know precisely what the number is, but it's a very low number required by law. And so we believe there's incredible potential in machine learning for enhancing the data, and the quality of the data, that we produce from our controllers, for sure.

Thomas Feldhausen:

No, I completely agree. With all manufacturing processes, the data's been around for decades, and we're really seeing some new technology make things pretty interesting.

Matteo Dariol:

For sure. Yep.

Stephen LaMarca:

My last one is... talking about all this data and trying to implement a new technology such as AI, is there a huge investment cost in the computational power required to implement AI in this use case? Or is it just a cost where whoever's using it pays for the service and everything else is taken care of?

Matteo Dariol:

So one problem with cloud platforms, and I'm not sure if I mentioned this already, is cost: making sure that you are managing and monitoring the costs that you incur in using the services. But the advantage is that the majority of services that you end up using are just [inaudible 00:38:08] use. I'll give you an example. The GPU farms that you use for machine learning model generation are paid on a time basis. So you create your algorithms, you clean your data and everything; once you start generating the model, the counter starts, and once it finishes, the counter stops, and you pay for those two hours' worth of computation that you requested from the cloud. You're essentially requesting GPU computation, hence serverless: you don't know where the server is, you don't provide the server [inaudible 00:38:47] and stuff like this, but the use of the servers is provided by the GPU farm. So you are paying for that service. It can be expensive at times, and it's really up to you to make it more efficient so that you don't execute for longer than needed. But yeah, cost monitoring is certainly something that you should keep in mind when using clouds. For sure.

Stephen LaMarca:

Awesome. Well, Matteo, this was a really great presentation, and thank you for fielding our questions at the end. We've got a little bit of time before Jason Jones, but I don't see why we shouldn't just hop right into it. But before we do, I would just like to mention that should anybody in our audience have questions, please write them in as they come up, instead of interrupting the presenter; not that we had that problem, because we certainly didn't. Just write them into the Q&A, and at the end of the presentation we will field them. And actually, we just got a question from one of our future presenters, Khershed Cooper of the NSF. Matteo, any fundamental cyber manufacturing challenges we need to consider from a basic research perspective?

Matteo Dariol:

Cyber manufacturing. I'm not sure what this is referring to. Controlling a manufacturing operation from a cloud? What does that mean specifically?

Stephen LaMarca:

I guess he means controlling your operation remotely. Yeah, go ahead and unmute, Dr. Cooper, if that helps.

Khershed Cooper:

Yes hi. Can you hear me?

Stephen LaMarca:

We got you.

Matteo Dariol:

Yes.

Khershed Cooper:

So, at the National Science Foundation we have a program which deals with cyber manufacturing, which is roughly the use of networking and other cloud means to control your processes and things like that. So I was just wondering, based on your presentation, if you see any areas which need further research at the fundamental level.

Matteo Dariol:

I don't know really about research at the fundamental level, as you say, but maybe I can give you some pointers on what is going on now in the industry, in the world. There is one trend: most companies, well, not most, a lot of companies are struggling to find people with a certain type of IT skills, which are necessary for this OT/IT merge in the industry, for the [inaudible 00:41:52] journey. People who can work in manufacturing and also know about IT are extremely rare to find, and companies are struggling to find those people to run their operations. And I've seen many cases where your IT guy is also your database guy and is also making the front-end applications and stuff like this. So they're obviously not good at doing all of that; it takes longer for them because it's a small team, and they might make mistakes. So this is one trend.

Matteo Dariol:

The other trend that I wanted to mention is that for discrete manufacturing, it's probably not convenient, maybe not even allowed, to manage everything remotely. For process manufacturing, I can see this happening. It is common to find a water treatment plant isolated deep in the forest, deep in the jungle, somewhere where nobody is on site. In that case, you have your [inaudible 00:42:55] system, you have your remote monitoring and control possibilities. But that's because it's process manufacturing; it's slower in its dynamics. Oil plants, refineries, stuff like this: you want to be on site, but maybe it's a very spread-out infrastructure, and you want to be able to cover it or manage it remotely. As I said, for discrete manufacturing, I don't see this really being a trend right now. People want to be there, people want to manage it. It's more real-time, with faster cycle times. I don't see this being too much of a possibility for research.

Stephen LaMarca:

Awesome Matteo. Again, thank you so much for your awesome presentation and fielding our questions afterwards.

Author
Matteo Dariol
Lead Innovation Strategist