Gain Insights from Data You Already Have With IBM AI Solutions
Offering Manager Scott Soutter explains how AI offerings can help clients derive deeper value from data they already have.
By Jim Utsler | 03/02/2020
To some, creating and using AI models is like alchemy: the seemingly mystical process of transformation that takes place in dark, dank castle laboratories run by bearded wizards. The most widely known example of this is turning lead into gold.
In the modern world, AI does indeed involve transformation, with raw data (lead) being turned into intelligent, actionable and inferable information (gold) by data scientists and business analysts.
That’s the current state of AI, with tools being made available across the enterprise so that anyone, in any department, can develop, deploy and refine AI models.
IBM AI solutions such as IBM Watson* Machine Learning Accelerator (WMLA), H2O Driverless AI and IBM PowerAI Vision—which has been recently augmented with the release of IBM Visual Inspector—require only goals and creativity.
Focusing AI Efforts
According to Scott Soutter, portfolio offering manager for Cognitive Systems Software, IBM’s early AI focus was primarily dedicated to delivering tools for data scientists or machine-learning engineers. Or as he puts it, “People who were trying from the bottom up to create their own environments in which they could develop and apply AI.”
To that end, a lot of IBM’s initial AI tool development work focused on being able to address complex challenges with large models by scaling up in memory or across system clusters, allowing many data scientists to work on anything from a single GPU to hundreds of GPUs.
One example of this is WMLA: Its goal is to support multiple, concurrently working data scientists solving either the same or different problems using a clustered infrastructure. Because the solution scales, it allows separate teams to securely gain access to a single system so they can pool their investment to create a larger, more powerful infrastructure.
“Everybody benefits from this type of environment,” Soutter says. “If different departments are funding this cluster, you can borrow and lend resources between teams. So, when you need a little bit more, you can typically get it. The same holds with your colleagues, with them being able to leverage or borrow your unused or underutilized resources. This makes perfect sense from both large-scale AI modeling and fiscal perspectives, because development costs money until you have an actual, working inference tool, which is where the payoff comes in.”
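The borrow/lend idea Soutter describes can be sketched in a few lines: each team funds a share of GPUs on the pooled cluster, and any team can temporarily use idle capacity beyond its own share. This is an entirely illustrative toy; WMLA’s real scheduler is far more sophisticated, and all names here are hypothetical.

```python
# Toy sketch of pooled-quota borrowing between teams on a shared cluster.
# Illustrative only; this is not the WMLA scheduler.
class SharedCluster:
    def __init__(self, quotas):
        self.quotas = dict(quotas)            # team -> GPUs the team funded
        self.in_use = {t: 0 for t in quotas}  # team -> GPUs currently held

    def request(self, team, gpus):
        """Grant GPUs if the cluster has idle capacity, even when that
        means the team borrows beyond the share it funded."""
        idle = sum(self.quotas.values()) - sum(self.in_use.values())
        if gpus <= idle:
            self.in_use[team] += gpus
            return True
        return False                          # cluster is fully busy

    def release(self, team, gpus):
        self.in_use[team] -= gpus

# Two departments fund 4 GPUs each; research borrows fraud's idle GPUs.
cluster = SharedCluster({"research": 4, "fraud": 4})
cluster.request("research", 4)             # research uses its own share
borrowed = cluster.request("research", 2)  # borrows 2 of fraud's idle GPUs
```

When every GPU is busy, a request simply waits (here, fails) until another team releases capacity, which is the fiscal argument Soutter makes: unused capacity one team paid for is never stranded.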
Although WMLA was largely designed with enterprise data scientists in mind, IBM recognized another type of client base interested in AI: business-line buyers more interested in packaged or custom-built AI solutions they can quickly snap into their environments while still taking advantage of the power of WMLA.
As Soutter explains, “H2O Driverless AI is a tool aimed at folks wanting to attack machine learning on a more transparent and automated level, and PowerAI Vision is geared toward those who want to look at computer vision problems. Both of these solutions kind of converge around Watson Machine Learning Accelerator for the ability to execute machine learning at scale to address your problems more completely.”
People interested in infusing more automation into their machine-learning and some deep-learning explorations can use the H2O Driverless AI interface and guided development environment to craft AI models with a high degree of accuracy. The solution offers automated, nearly hands-off assistance in every step of the process.
The system manages this by applying the expertise of Kaggle Grandmasters from H2O.ai via a powerful UI. Extremely intuitive, the interface—and the “data scientists in a box,” as Soutter calls the grandmasters—provides individuals with the relevant information they need to build correct AI outcomes based on user goals and expectations.
“The value here is that you have these people almost working alongside you. You have all of their thought and their expertise built into the algorithms in such a way that problems are parsed inside of the software,” Soutter says. “For folks who are either early in their AI journey or want to rapidly build a prototype, that guidance can carry them almost through the entire AI development process.”
Simplicity for Visual Models
PowerAI Vision is similarly simple to use. Feed it a hundred images, tweak the results for the desired model outcome, and PowerAI Vision will automatically infer which of another thousand images will adhere to the model outcome. As new images are thrown into the mix, from whichever sources, it will then continue to refine the model, the original version of which can be created in as little as a few hours.
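The workflow described above—label a small seed set, train, let the model label a larger pool, then refine as corrected examples arrive—can be sketched generically. The nearest-centroid “model” and all names below are hypothetical stand-ins for illustration, not the PowerAI Vision API.

```python
# Illustrative sketch of the label-a-few / infer-the-rest / refine loop
# that PowerAI Vision automates. Not the actual PowerAI Vision API.
from collections import defaultdict

class TinyVisionModel:
    """Toy nearest-centroid classifier over 2-D image feature vectors."""
    def __init__(self):
        self.centroids = {}                        # label -> mean feature
        self._sums = defaultdict(lambda: [0.0, 0.0])
        self._counts = defaultdict(int)

    def train(self, labeled_images):
        """labeled_images: iterable of ((x, y) feature vector, label)."""
        for (x, y), label in labeled_images:
            self._sums[label][0] += x
            self._sums[label][1] += y
            self._counts[label] += 1
        self.centroids = {
            lbl: (s[0] / self._counts[lbl], s[1] / self._counts[lbl])
            for lbl, s in self._sums.items()
        }

    def infer(self, feature_vector):
        """Return the label whose centroid is closest to the input."""
        x, y = feature_vector
        return min(self.centroids,
                   key=lambda lbl: (self.centroids[lbl][0] - x) ** 2 +
                                   (self.centroids[lbl][1] - y) ** 2)

# 1) Train on a small hand-labeled seed set ("feed it a hundred images").
seed = [((0.1, 0.2), "defect"), ((0.2, 0.1), "defect"),
        ((0.9, 0.8), "ok"), ((0.8, 0.9), "ok")]
model = TinyVisionModel()
model.train(seed)

# 2) Auto-label a larger unlabeled pool with the trained model.
pool = [(0.15, 0.15), (0.85, 0.85)]
auto_labels = [model.infer(img) for img in pool]

# 3) Refine: feed newly labeled images back in and retrain incrementally.
model.train([((0.5, 0.5), "defect")])
```

The point of the sketch is the loop, not the classifier: each pass of step 3 folds fresh labels into the model, which is how the original few-hour model keeps improving as new images are thrown into the mix.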
But as Soutter notes, even with H2O Driverless AI and PowerAI Vision, model development by itself simply isn’t enough. “A lot of our clients had been approaching AI from the experimental side, and some of them, especially early adopters, have actually begun moving it into production,” he says. “But we’re still seeing a large group of people who are exploring AI. You can only do that for so long, however. Model development costs money. Eventually, they’ve got to start worrying about how they operationalize it—to get a return on their investment.”
That’s where practical tools such as the PowerAI Vision-aligned Visual Inspector come into play. A native iOS/iPadOS mobile app, it’s been designed to enhance the capabilities of PowerAI Vision by enabling visual inspections and inferencing using mobile-device and mounted cameras.
Notably, the devices don’t have to be connected to a network and a server-hosted AI model in order to function. Instead, PowerAI Vision models are uploaded to the Visual Inspector app, depending on model design and purpose.
This allows for on-site real-time inferences across a variety of industries. For example, Visual Inspector can be used for large-scale industrial inspections with models that are meant to look specifically at refineries or civil infrastructure, such as bridges and roads. An insurance adjuster could use it to determine the severity of the damage to a car that’s been in an accident or, following a natural disaster, understand the scope of the problem, with AI categorizing and organizing relevant information.
Inferring can take place completely on the device, which is a boon to remote field workers. For instance, if a worker is inspecting telephone poles in a remote area with limited or no over-the-air access to back-end systems, they’re still able to use an accurate model asynchronously. When the user does receive connectivity, the information on the device will sync up with the larger network and be fed back into the model.
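An offline-first pattern like the one described—run inference on the device, queue the results locally, and flush them to the back end once connectivity returns—might look like the following. All names are hypothetical; Visual Inspector’s actual sync mechanism isn’t documented here.

```python
# Sketch of an offline-first inference log: results are queued on the
# device and flushed upstream once connectivity is available.
# Hypothetical names throughout; not the Visual Inspector API.
import json
from datetime import datetime, timezone

class OfflineInferenceQueue:
    def __init__(self, upload_fn):
        self.pending = []           # results not yet synced
        self.upload_fn = upload_fn  # callable that pushes one record upstream

    def record(self, image_id, label, score):
        """Store an on-device inference result for later sync."""
        self.pending.append({
            "image_id": image_id,
            "label": label,
            "score": score,
            "captured_at": datetime.now(timezone.utc).isoformat(),
        })

    def sync(self, connected):
        """Flush queued results when the device regains connectivity."""
        if not connected:
            return 0
        flushed = 0
        while self.pending:
            self.upload_fn(json.dumps(self.pending.pop(0)))
            flushed += 1
        return flushed

# Usage: inspect poles offline, then sync when back in coverage.
uploaded = []
q = OfflineInferenceQueue(upload_fn=uploaded.append)
q.record("pole-001", "corrosion", 0.93)
q.record("pole-002", "ok", 0.98)
q.sync(connected=False)  # still offline: nothing leaves the device
q.sync(connected=True)   # back online: both results are uploaded
```

Once the queued records reach the back end, they can be fed into the model as new training data, closing the refinement loop the article describes.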
Visual Inspector also includes a dashboard that can show developers how models are performing across the organization. Additionally, supervisors can use the dashboard to manage the Visual Inspector environment: They can determine who has access to the dashboard, which devices are restricted to running designated inference models and which devices are allowed to run inference models in an ad hoc manner. PowerAI Vision and Visual Inspector make a powerful duo.
Tools for Actionable Insights
IBM’s goal with all of its AI offerings is to make AI modeling and deployment a viable, transparent and real-world science that can indeed turn lead into gold.
“Our focus before had been on model training and model development speeds. We’ve really been in the middle of this cycle, but now we’re really looking at end to end. How do you make the process of gathering the data, deploying the model and monitoring the accuracy of the model easier?” Soutter says. “So, the solutions we’ve been adding are really designed to make AI something that’s more accessible to organizations so they’re able to get value out of AI, instead of being stuck in an endless and costly research cycle.”
Jim Utsler, IBM Systems magazine senior writer, has been writing for IBM since the mid-1990s.