TechTalk Introduction to Computer Vision for Businesses (Part 1)
As the use of artificial intelligence grows and evolves, many organizations are exploring ways to put computer vision to work.
By Insight Editor / 19 Mar 2021 / Topics: Data and AI
Computer vision applications leverage data patterns to improve logistics, bolster safety, enhance quality and more. In part one of this three-part series, Insight Digital Innovation’s Amol Ajgaonkar, Ken Seier and Ben Kotvis discuss multiple use cases that demonstrate how computer vision gives businesses a competitive edge.
To experience this week’s episode, listen on the player above, watch the conversation below, or scroll down to read a complete transcript. You can also subscribe to Insight TechTalk on Apple Podcasts, Pandora, and Spotify.
Audio transcript:
Published March 18, 2021
BEN
Hello everyone, and welcome to today's Tech Talk: an intro to computer vision for businesses. I'm your host, Ben Kotvis, architect of IoT. And joining me today are my colleagues Amol Ajgaonkar, CTO of intelligent edge, and Ken Seier, the national practice lead for data and AI. Thank you both for being here.
KEN
Thank you for having us.
AMOL
Thanks Ben.
BEN
As we know, computer vision is a technology that's been gaining ground recently as businesses continue to explore innovative applications for AI. In this three-part series, we'll be taking a look at some specific industry use cases along with some practical and technical considerations, for those who might be interested in how this technology can give their businesses an edge.
But for today, let's get started with some of the big questions. First off, for those who may not be familiar with the technology, what is computer vision? Ken, how do we define it?
KEN
Sure, computer vision is a form of artificial intelligence, and while that's a fairly misunderstood and fancy term, artificial intelligence is just using math to understand patterns in historical data and apply them to the present and the future.
So, in this case, we're talking about using that math to understand what might be in an image or a set of videos. To build a computer vision model, we take a number of images or a section of video. We annotate it to mark what we're trying to identify, recognize or find. We use that math to learn the patterns in that video and to predict, in future video or imagery, whether or not that pattern is recurring.
So, we might be predicting say whether or not an intersection is busy, or a light is red for a self-driving car. We might be predicting whether a part on an assembly line is good or maybe it doesn't meet standards, or it might be trying to predict whether or not a certain zone is safe for humans to operate in. In all those cases we're analyzing historical video, training a model and then using that model to predict an outcome or a classification in present and future images.
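The annotate-train-predict workflow Ken describes can be sketched as a toy example. The nearest-centroid "model," the fake 8x8 grayscale images and the "busy"/"quiet" labels below are all illustrative assumptions, not the deep learning tooling a real computer vision system would use; they just make the three steps visible in a few lines.

```python
# Toy sketch: annotate historical images, learn a pattern, predict on new images.
import random

def make_image(bright, size=8):
    base = 200 if bright else 50           # bright vs. dark scene
    return [[base + random.randint(-20, 20) for _ in range(size)]
            for _ in range(size)]

def mean_pixel(img):
    # A single hand-picked feature stands in for what a real model learns.
    return sum(sum(row) for row in img) / (len(img) * len(img[0]))

# 1) "Annotate" historical images with labels.
random.seed(0)
training = [(make_image(True), "busy") for _ in range(20)] + \
           [(make_image(False), "quiet") for _ in range(20)]

# 2) "Train": learn one centroid (average feature value) per label.
centroids = {}
for label in {"busy", "quiet"}:
    feats = [mean_pixel(img) for img, lab in training if lab == label]
    centroids[label] = sum(feats) / len(feats)

# 3) "Predict" on a new, unseen image: pick the closest centroid.
def predict(img):
    f = mean_pixel(img)
    return min(centroids, key=lambda label: abs(centroids[label] - f))

print(predict(make_image(True)))   # a bright image classifies as "busy"
```

The same shape — labeled history in, learned pattern out, prediction on fresh input — carries over when the features and model come from a neural network instead of a mean-pixel heuristic.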
BEN
Thanks Ken. A lot of people may be familiar with computer vision in the form of self-driving cars, which rely on many computer vision models to identify traffic signs, cars and pedestrians. But not all computer vision applications are that complex. So, once we've created a model, how do we implement it? Amol, what devices and technologies are involved in a computer vision system?
AMOL
So, like you said, not all computer vision solutions are complex. Once you have the model, you have to run it, and you could run it in the cloud or you could run it at the edge. When I say edge, it's just another compute device that's closer to where the input is being generated. So, wherever your video camera or your other surveillance systems are, we bring the compute right there and run the model on that edge device, so you get reduced latency and you get inferences quickly. But the other part of operationalizing these models is being able to update them remotely as well.
So, imagine you've deployed this in 10 different locations, and you've now updated your model with a new data set. How do you update the model in all 10 of those locations? By using the cloud and the hyperscalers, putting your device management solution there and having your models run at the edge, you can update those models and put a complete MLOps pipeline in place. When you collect new data, you train the model and have a new version. You may want to test that version again, maybe in select locations: deploy it back onto the edge, look at the inferences, decide which one is better, and then make the switch for all the locations. So, operationalizing an AI solution or a computer vision model is critical to the success of the entire solution itself.
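The staged rollout Amol walks through — a new model version, a pilot at select locations, a comparison, then a fleet-wide switch — can be sketched as a small control loop. Everything here (the registry layout, the `evaluate()` stub, the site names) is a hypothetical illustration, not a real device management API; a production pipeline would replay labeled footage through each deployed model to score it.

```python
# Hedged sketch of a staged model rollout across 10 edge locations.

registry = {"latest": "v2", "stable": "v1"}              # model versions
locations = {f"store-{i}": "v1" for i in range(1, 11)}   # 10 edge sites

def evaluate(version, location):
    # Placeholder score; a real pipeline would measure inference quality
    # at that site against labeled data.
    return 0.91 if version == "v2" else 0.88

def staged_rollout(pilot_sites):
    # 1) Deploy the candidate only to pilot locations.
    for site in pilot_sites:
        locations[site] = registry["latest"]
    # 2) Compare candidate vs. stable on the pilots.
    candidate_wins = all(
        evaluate(registry["latest"], s) > evaluate(registry["stable"], s)
        for s in pilot_sites)
    # 3) Promote everywhere only if the candidate wins.
    if candidate_wins:
        registry["stable"] = registry["latest"]
        for site in locations:
            locations[site] = registry["stable"]
    return candidate_wins

print(staged_rollout(["store-1", "store-2"]))  # True: v2 wins, all sites updated
```

The point of the structure is that promotion is a decision, not a default: if the candidate loses at the pilot sites, the other eight locations never see it.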
BEN
In the past few years, we've seen more and more use cases for computer vision coming from different industries. We'll get into some more specific examples in part two of the series, but on a high level, why are organizations investing in computer vision these days? Are there common threads in terms of benefits or business outcomes?
Why don't we start with you Ken?
KEN
Sure, computer vision is a really powerful and flexible tool that can be applied to a wide variety of use cases. Computers are really good at executing the same task over and over again, whereas human operators can experience attention, cognitive or decision fatigue. This gives us an opportunity to apply that consistency to human decisioning processes to enhance human operations or human safety. And in cases where we can fully automate the responsibility to a computer vision model, it frees human workers to focus on more complicated and valuable tasks that AI can't approach. We're seeing computer vision across industries providing incremental improvements to safety, to quality, to top- and bottom-line operations and logistics. Those small incremental benefits, sometimes 1% benefits, really add up over time, leading companies to want to operationalize more AI.
BEN
Speaking of operationalization, Amol, can you touch on that piece, the computing at the edge?
AMOL
Absolutely. So, once you've identified the use case, like Ken said, and you've built the model for it, operationalizing that at the edge does a couple of things. One is it reduces your latency: the response time of sending the image frame to the cloud or somewhere else like a data center, analyzing it, and getting the inferences back in a meaningful manner so that the people on the floor, right at that place, can actually take action on it. That's the end goal. So, bringing the model execution to the edge reduces that latency.
Second, it also reduces costs in terms of storage. Video is notorious for that, right? If you want to send a lot of video data to the cloud, first of all, there's latency, and it's going to take up some bandwidth, so you're going to pay for the bandwidth plus the storage costs. You could reduce that by having storage on-premises, near the edge itself. You also have a lesser impact on your network bandwidth, because you're not hogging it just to upload video; everything happens in the local loop on a local network, so it's quicker. And when you put all of these together, along with the 1% increase or decrease Ken mentioned, holistically it really accelerates the value of that solution, right?
So, you're looking at the business aspects of it. You're looking at the outcomes, and then you're saying, okay, if I operationalize it this way, I'm going to get more savings, right? You're increasing the value of your solution by making sure that you've saved every bit of money executing that model and that you get the results as quickly as possible.
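The bandwidth saving Amol describes can be put in back-of-envelope numbers. The figures below (10 cameras, 4 Mbps per 1080p stream, 1 KB/sec of inference events per camera) are made-up but plausible assumptions, just to show the order-of-magnitude gap between shipping raw video to the cloud and shipping only inference results from the edge.

```python
# Back-of-envelope: daily upload volume, cloud streaming vs. edge inference.
cameras = 10
bitrate_mbps = 4                  # assumed per-camera 1080p stream
hours_per_day = 24

# Cloud path: every frame leaves the building.
# Mbps / 8 = MB/s; * seconds/day = MB/day; / 1000 = GB/day.
gb_per_day_cloud = cameras * bitrate_mbps / 8 * 3600 * hours_per_day / 1000

# Edge path: only small inference events leave (assume 1 KB/sec/camera).
gb_per_day_edge = cameras * 1e-6 * 3600 * hours_per_day  # 1 KB = 1e-6 GB

print(f"cloud upload: {gb_per_day_cloud:.0f} GB/day")   # 432 GB/day
print(f"edge upload:  {gb_per_day_edge:.2f} GB/day")    # 0.86 GB/day
```

Even with generous assumptions for the event payloads, the edge path moves roughly three orders of magnitude less data off-site, which is where the bandwidth and cloud storage savings come from.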
BEN
Any emerging technology is bound to come with some general misconceptions. In your experience what are some things business leaders or employees may not fully understand about computer vision? Are there any common questions or concerns you've encountered?
Amol, let's start with you.
AMOL
Sure. I think the big misconception about computer vision models is that people think it's magic, that it'll just happen. But it's literally, and Ken has said this to me many times in many different ways, garbage in, garbage out. So, the collection of that training data is extremely important, and it does require time. Even though there are models in the market, and you could obviously start with them, making the model yours, making it particular enough that it works for your use case, with your lighting conditions, means gathering the right data, and that becomes super important, so that we can train the model and then deploy it. Testing and managing all of the models at scale is another misconception, where people think it's one and done: you build a model, you deploy it, and you're all set.
I think it's a lot more than that to make it a viable solution, especially managing the models at scale. Once you have multiple versions of those models, how do you manage them? How do you know which model is the right one? Which one is deployed in which location, at which edge, for what use case? Being able to do that at scale is extremely important. So, if customers think of a computer vision solution, or any edge solution, holistically, all of these pieces will come together.
BEN
And Ken, are there common questions or concerns that you've encountered?
KEN
Sure. Modern AI is a very narrow version of intelligence. It's not the broad, thinking intelligence that we see, say, in science fiction. Computers learn to identify and classify objects very differently from humans. They can be very good at it, they can be very consistent at it, but they can't go outside the bounds of what they've been taught to recognize. And that means they make very different and interesting kinds of mistakes where a human wouldn't. As Amol has been saying, it's very important to gather the correct type of data, to assemble that data into a properly weighted training set and to train the model using the complete range of experiences it would have in operation. That's relatively easy to do in, say, a plant or an assembly line, where the environment is very controlled. It can be more challenging elsewhere. There's a great story about an early self-driving car under human operation; it was just being tested. It went by a car carrier, one of those big trucks with nine or 10 cars on the back, and it completely lost its mind. It could not figure out how 10 cars were all stacked up on each other. That's the kind of importance we have to put on the training set.
But it goes so far beyond that. Like Amol has been talking about: having that mature operationalization, being able to retrain a model, understanding when a model is falling out of tune, and then going all the way into the human and business processes, 'cause you want to make sure that whatever you implement for artificial intelligence is responsible and that you're following principles of responsible AI.
BEN
Ken, you just mentioned responsible AI. Can you talk a little bit more about how it relates to computer vision?
KEN
Absolutely. We've been talking about the impact of AI in a business context: top- and bottom-line benefits, operational efficiencies. These are all very important, but at the same time, we have to understand that these types of 1% improvements also affect human lives. We always need to be thinking about the humans who are going to be impacted by AI, both positively and negatively. We're talking about using AI in work environments, in healthcare, in security and safety. We need to make sure that whatever we implement has a positive impact across the entire population we're serving, and the population we're not serving. It's a very powerful tool. It moves very quickly, and it's very automated as it reaches maturity.
And so, we need to be thinking about the security of our AI and the underlying data. We need to think about bias and inclusion in our data sets and how this will impact different populations. We need to be thinking about how we intend our AI model to be used, and how it could be used differently in ways that might be unfair or biased. We need to talk about the risk of a false positive or a false negative. And we need to make sure that whenever an impactful human decision is being made, there's an opportunity to bring a human into the loop, a human override. If the AI model makes a mistake, if it's inaccurate, if it was fed bad data or maybe it wasn't trained properly for this particular case, a human being has the responsibility and the awareness to come into the decision-making process and make a better decision for the humans being impacted. AI is so powerful; we have to remember that AI is in service of people.
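The human-override pattern Ken describes can be sketched as a simple routing rule: any prediction below a confidence threshold, or in a category with high human impact, goes to a person instead of being acted on automatically. The threshold, labels and function names here are illustrative assumptions, not a standard API.

```python
# Minimal human-in-the-loop routing sketch.
REVIEW_THRESHOLD = 0.85                              # assumed cutoff
HIGH_IMPACT = {"deny_access", "flag_defect_scrap"}   # assumed categories

def route(prediction, confidence):
    """Return who decides: the model automatically, or a human reviewer."""
    if confidence < REVIEW_THRESHOLD or prediction in HIGH_IMPACT:
        return "human_review"
    return "auto"

print(route("approve_access", 0.97))   # auto
print(route("deny_access", 0.97))      # human_review: high-impact decision
print(route("approve_access", 0.60))   # human_review: low confidence
```

The design choice is that impactful decisions escalate even when the model is confident; confidence alone is not a license to automate.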
BEN
So, our last question for today, and Ken, we'll start with you. As we start to look ahead, what excites you the most about computer vision, and what role do you see it playing in modern businesses over the next one to three years?
KEN
So, artificial intelligence, and especially computer vision because it's so flexible, has the power to drive so many changes: better quality, lower-cost goods and services, and the ability to really dynamically reshape and disrupt markets. It only takes a 1% improvement for a company to disrupt its marketplace. We want to work with these companies to capture the best intelligence of their SMEs and augment and enhance their entire workforce using those SMEs and artificial intelligence. We want to make sure that we're pointed at improving quality of life: the health, the safety and the wellbeing of humans, both for these companies and for those who consume from them. And we really want to make sure that we're doing that in an iterative, responsible way, so companies can continue to build more and more responsible AI to drive that value.
BEN
Amol.
AMOL
And from my perspective, as Ken mentioned, as we build more of these solutions, bringing them down to the edge means our customers see and get the most value out of them, and can respond to the inferences and make better decisions. The human workforce gets to use its valuable time for more valuable tasks, right? Where the talent and the skill are best used, versus making the same decision over and over again by just looking at a product and deciding whether it's right or wrong. I think intelligent edge is the new cloud, and bringing those workloads back to the edge for the really time-sensitive processing, then using the cloud for scale, is going to be super important. That's really exciting for me.
BEN
Well thank you both for sharing your thoughts with us today. In the next episode, in this series we'll be taking a deeper dive into some of the unique ways computer vision is being applied across different industries to improve experiences and processes. I look forward to talking with you both again next week.
KEN
Thank you Ben.
[Music]