IT leaders are leveraging artificial intelligence tools to gain clearer insight into business practices and ramp up digital transformation. The application of AI to reveal business opportunities and get ahead of the market is a competitive advantage IT leaders cannot ignore.
In this panel discussion, you’ll hear how top companies utilize AI to mine new business opportunities and monetize data.
Note: This content originally appeared during Argyle’s AI/ML Technology Leadership Summit: Innovation, Transformation, Strategy on April 27, 2023.
Vicki Lynn Brunskill 0:00
Hello and welcome again to the Argyle AI/ML Technology Leadership Summit. My name is Vicki Lynn Brunskill with Argyle. It’s great to have everyone joining us today. Just a couple of notes before I turn things over to our panel moderator. First, a quick reminder to stop by our sponsors’ virtual booths at any time during today’s event, and for the following week. Our partners are committed to providing you with valuable content and a great overall experience today. At any time during today’s event, you can visit their virtual booths from the main agenda page; those include complimentary materials, information, and meet-and-greet opportunities. To ask questions throughout the session, simply type into the Q&A chat and we will address your questions at the end of the session. And now, without further delay, I’d like to introduce our moderator for this very, very interesting panel: Vatsala Sarathy, managing director of technology, finance and operations at Stanford University Graduate School of Business. We are so excited to have Vatsala and our panelists with us for a discussion titled "Data-Driven Business Value: AI/ML Opportunities and Challenges." Welcome, Vatsala. Over to you.
Vatsala Sarathy 1:10
Thank you so much, Vicki Lynn, for this wonderful introduction and for kicking off the session. Hello, everyone. Welcome to this session focusing on building business value using AI and ML technologies. I am so excited to be here with all of you and look forward to your thoughts and questions as we move along in this discussion. We are very fortunate to have a very diverse set of panelists today, from whom I hope we will see a fascinating exchange of ideas. Before we dive into our topic, let’s do a quick round of introductions, starting with you, Balaji. Tell us about your work in AI and ML, and maybe give us one lesser-known fun fact about you.
Balaji Veeramani 1:56
Hey, thanks. Good morning, everyone. I hope everyone’s having a great day so far. Pleasure being here. I’m Balaji Veeramani. I’m with Union Pacific, currently the director of data governance, working on our migration from SQL Server to Snowflake, and I’m responsible for setting up the ML/AI platform for Union Pacific. So that’s my role here. As for a fun fact about me, I have a lot of fun facts, to be honest, but we were just discussing cricket, so I’ll keep it to one: cricket. Great. Thank you.
Vatsala Sarathy 2:35
Thank you, Balaji. It’s fascinating to have cricket as a passion even after so many years since you migrated over from India. Next, I have Matthew Versaggi. Can you give us a quick introduction, Matthew?
Matt Versaggi 2:50
Good morning, everyone. Matt Versaggi here, in the Fortune 5 healthcare space. I’ve got a unique blend of business, tech, education, and entrepreneurial backgrounds. I helped build a number of their AI programs, helped found the College of Artificial Intelligence, and broadened and matured both cognitive technologies, built around the way the brain’s mechanics work as opposed to machine learning, and quantum computing in that space, and matured that program. Fun fact: I’m a polo coach. I coached for 15 years, on two continents, in three states, at four clubs and one high school, and produced a bunch of champions along the way. We were talking about that as well during the downtime. Back to you.
Vatsala Sarathy 3:37
It’s indeed a variety of verticals and areas that you have worked in, so we are very excited to have you here. Next, let’s go to Martin Miller. Martin?
Martin Miller 3:48
Thank you for having me. I’m Martin Miller, with 30 years of software development expertise. My passion goes into driving solutions. I’m a builder; that’s what I do. Machine learning: I build it, I make it work, and I make it work at scale and hyperscale. Fun fact: I’ve been in a movie with Dev Patel called "The Road Within." You can see me in the last 45 seconds of the movie with my family. No money earned, but it was fun to be on set with them.
Vatsala Sarathy 4:26
Well, I hope you get called into other movies as well. And finally, we have Sachit Kamat. Sachit, can you introduce yourself, please?
Sachit Kamat 4:37
Hey folks, I’m Sachit Kamat. I’m the Chief Product Officer at Eightfold AI. As the name suggests, we build AI solutions in the HR technology space and have been serving a lot of Fortune 500 companies over the last few years. I have a background in this space, having worked at companies like LinkedIn and Uber, where I ran products that used AI to bring a lot of different kinds of experiences to consumers: all the way from the matching technology used at LinkedIn to match job seekers to opportunities, to payments at Uber, where we effectively used AI for making a lot of decisions about transactions. In terms of a fun fact: if I wasn’t doing this, it would probably be becoming a sommelier, although what I’ve just heard is that ChatGPT has just passed the sommelier exam, so I think I’m a step behind right now.
Vatsala Sarathy 5:30
I think you are, and we can talk more about that as well. What a wonderful mix of panelists we have. Thank you all for being with us today and sharing your experiences with our audience. AI is becoming commonplace in the business world, at least in terms of awareness and discussions. However, we don’t hear very often how companies go about integrating AI and ML technologies into their business functions. We don’t hear about how they started, how they built strategy, what functions were the first to adopt these tools, how they learned and evolved, and, equally importantly, how they use data to mark progress and measure success. These are some of the areas I hope we can cover with our panelists today. So, let me start with you, Sachit. In your experience, do companies integrate AI broadly across the board, or do they do it one function at a time and then move on to other areas? Can you give us some practical examples of what you have seen different companies do?
Sachit Kamat 6:35
Yeah, what I would say is that we’re very much at the infancy of the use of AI within large corporations, right? Effectively, when you think of AI, many companies basically go down to automation: what are the tasks that machines can potentially do better than people? Just to give you an example, some of our customers today are effectively using AI to automate the screening process when it comes to recruiting. If you think about it, for a human to go through millions of resumes and determine who is the right fit for a particular role would take a lifetime. Those are the kinds of things where AI can come to the fore and do that operation in a matter of seconds. And those are the areas where we’ve seen adoption at the leading edge: utilizing AI for practical applications that would just be impractical for humans. On the other side of it, there are many things now that you can envision where AI could be more useful, in terms of some of the new advancements that have happened with large language models. But I would say that most companies are very much at the starting point of figuring out how to incorporate these technologies within their companies, and it’s an exciting time to be in the space, to be honest.
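[Editor's note] As an illustration of the kind of automated screening Sachit describes, here is a minimal sketch in Python: ranking resumes against a job description by bag-of-words cosine similarity. Everything here (the scoring method, the job text, the resumes) is invented for illustration; production systems such as Eightfold's use far richer models.

```python
import math
import re
from collections import Counter

def vectorize(text):
    # Lowercase bag-of-words vector for a piece of text.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_resumes(job_description, resumes):
    # Return resumes ordered by similarity to the job description, best first.
    job_vec = vectorize(job_description)
    scored = [(cosine(job_vec, vectorize(r)), r) for r in resumes]
    return [r for _, r in sorted(scored, key=lambda pair: -pair[0])]

job = "Senior Python engineer with machine learning and data pipeline experience"
resumes = [
    "Java developer, enterprise middleware, ten years of experience",
    "Python engineer who built machine learning pipelines at scale",
    "Graphic designer focused on branding and illustration",
]
print(rank_resumes(job, resumes)[0])
```

The point of the sketch is the speed argument from the discussion: scoring a million documents this way is a few seconds of arithmetic, while a human reviewer would need a lifetime.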
Vatsala Sarathy 7:50
And it still being at an infancy stage, I think, is great, because you can explore and you can experiment with it. Matthew, what were some of the best and worst AI decisions you have seen made by organizations?
Matt Versaggi 8:05
Ah, well, it depends. There are many. It depends on where they are in their journey, right? You’ve got a startup that’s going to use AI to build a business around; you have a mid-tier company that’s gotten through its beginning stages and is now starting to scale; and then you’ve got companies that have gone from that to where they’re really going further in the digital journey and incorporating AI everywhere, as a general-purpose technology. Each one of those can make good and bad decisions at any one of those stages. Early stages tend to be a bit more myopic: AI is going to cure all the world’s evils and concerns. And they forget that business value must be established; they think that AI is the business value, and it’s not. You don’t care what’s on your phone; you just care that it works. So that’s an issue there. In the mid-tier, when they’re starting to scale, one of the things we found is that infrastructure must catch up to AI. AI comes in and starts doing wonderful things, let’s just look at the machine learning space, but then the infrastructure has really got to come up around that to meet it, and those must be paired. That’s a finding you see when you go down that road. And then on the more mature side, when they’re beginning to wrap AI into their total business journey, you’ve got your core aspect of AI, the core product and business you’re in, and AI can undergird that. But then it permeates throughout the rest of the organization: HR, finance, marketing, all these areas start to use AI, but at different rates and different clock speeds. So those bad decisions come in so many different places, either in your primary aspect or in some of the smaller things. Back to you.
Vatsala Sarathy 9:58
Yeah. Anything you want to add, Martin?
Martin Miller 10:00
So, I concur with the panelists here. There’s a learning curve, and I’d also say we’re still in the early phase, even though implementation of machine learning technologies has been going on for 20-plus years. The tooling is all over the place: you have several vendors that go full cycle from the tooling point of view, and you have customers that are just new to the journey picking up these tools. It’s like the first time they picked up a screwdriver or a hammer, and everything looks like a nail when you have a hammer. The problem with that kind of thinking is that you end up using your hammer on nails that aren’t there. The reason I point this out is that for some solutions, you can step back and say, well, that’s maybe better as robotic process automation versus a machine learning process. And then the other puzzle piece to bring in here, coming back to business value, is whether your business stakeholders understand these machines. They aren’t self-sustaining; they must be trained or retrained on some cadence, or based on triggering events or new information. And that is a tough one for people to chew on, I think. They assume it’s just a black box and fully contained. That’s where I’ll stop and hand it back.
Vatsala Sarathy 11:19
Yeah, great points. There is a question that just came in that might be interesting to address at this time: AI has a lot of applications, like resume screening, but there is also an argument that it cannot think outside the box. Balaji, do you want to take this on and tell us what you think about this?
Balaji Veeramani 11:39
Hey, that’s a really wonderful question, and I need to acknowledge it. The machine learning concept itself dates from 1950, when Alan Turing came in and wrote that paper called "Computing Machinery and Intelligence." Today, with whatever we do, there is something called the Turing test we perform, to see whether AI can get to a level of rational thinking like a human. We have supervised training, which just predicts: if you’re familiar with the concepts, we give it the parameters, see whether they hold, and it learns through a huge amount of labeled data. And there is another piece, which is what today’s generative AI, the cutting-edge technology, is all about. From 1950 through today, some 70 years, we had not seen as much advancement as in this last stretch. What’s making it so interesting is that today we have unsupervised training: the model can train itself over the period, again and again and again, and it can fetch the data. Everyone knows ChatGPT by now, but many may not know Bard from Google. If you play with them, you can see the difference: Bard cannot quite communicate the way a human does, but ChatGPT, to a level, can. This is an evolving technology. We may get there; we are evolving, and we will get there. It’s like a score: we are currently at 90; we may get to 95, we may get to 99.9, and onward. To answer that question very specifically: once it can start learning by itself, it can start thinking out of the box, and we are on that journey.
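[Editor's note] Balaji's supervised-versus-unsupervised distinction can be sketched in miniature. The toy one-dimensional data below is invented for illustration: the supervised routine predicts a label from labeled examples, while the unsupervised routine (a bare-bones 1-D k-means) discovers the same two groups with no labels at all.

```python
import random

random.seed(0)  # make the demo deterministic

# Toy 1-D sensor readings: supervised data carries labels, unsupervised does not.
labeled = [(0.9, "low"), (1.1, "low"), (1.0, "low"),
           (4.8, "high"), (5.2, "high"), (5.0, "high")]
unlabeled = [x for x, _ in labeled]

def supervised_predict(x, training):
    # Supervised learning in miniature: label of the nearest labeled example.
    return min(training, key=lambda pair: abs(pair[0] - x))[1]

def unsupervised_clusters(points, k=2, iterations=25):
    # Unsupervised learning in miniature: 1-D k-means, no labels anywhere.
    centers = random.sample(points, k)
    for _ in range(iterations):
        groups = {c: [] for c in centers}
        for p in points:
            groups[min(centers, key=lambda c: abs(c - p))].append(p)
        centers = [sum(g) / len(g) for g in groups.values() if g]
    return sorted(centers)

print(supervised_predict(1.3, labeled))   # low
centers = unsupervised_clusters(unlabeled)
print(centers)                            # two centers near 1.0 and 5.0
```

The unsupervised routine never sees "low" or "high"; it recovers the structure from the data alone, which is the property Balaji points to as the engine of the current generative era.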
Vatsala Sarathy 13:37
We’re not there yet, but it is possible, is kind of what you’re saying. Awesome. Great. Now let’s talk about the role of data itself in fueling innovation within AI. I’ll start with you, Martin. Can you talk about the importance of data strategies in leveraging AI technology, especially, maybe, training data for the algorithms and so on?
Martin Miller 14:04
Let me start with this: there could be a volume of data sources built into a solution of any type. Those data sources could be streamed or batched, and then they’re used for training. Within that realm, you could have a lapse of data, elements missing. So, when you’re using systems of that nature, where there are different timelines for when the data flows in, you’re going to have differing usage of how you can train it and differing efficacy of an inference. You need to focus very heavily on your data tooling for automation, and you need failsafe mechanisms in place for what to do when a workflow crashes. How do you recover it, auto-recover it? Because if you run inference on bad data, you could make The Wall Street Journal for reasons you don’t want to, and my goal is to not do that. So I just want to point that out: data is your gold; don’t slip on it.
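[Editor's note] The failsafe mechanisms Martin describes might start with a gate like the following sketch, which quarantines records with missing fields so bad data never reaches training or inference. The field names and records are hypothetical.

```python
REQUIRED_FIELDS = {"sensor_id", "timestamp", "value"}

def validate_batch(records):
    # Split incoming records into clean rows and quarantined rows.
    # A record is quarantined if any required field is absent or None.
    clean, quarantined = [], []
    for record in records:
        missing = sorted(f for f in REQUIRED_FIELDS
                         if f not in record or record[f] is None)
        if missing:
            quarantined.append({"record": record, "missing": missing})
        else:
            clean.append(record)
    return clean, quarantined

batch = [
    {"sensor_id": "s1", "timestamp": 1700000000, "value": 21.5},
    {"sensor_id": "s2", "timestamp": None, "value": 19.8},  # late-arriving stream
    {"sensor_id": "s3", "value": 22.1},                     # dropped field
]
clean, quarantined = validate_batch(batch)
print(f"{len(clean)} clean, {len(quarantined)} quarantined")
```

A real pipeline would route the quarantined rows to alerting and auto-recovery rather than silently dropping them; the sketch only shows the gating step.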
Vatsala Sarathy 15:10
Yeah, absolutely; could not agree more. Sachit, can you give us some examples of how data helped you or your organization, in any role you’ve been in, in changing your strategy or course-correcting? And what kind of business models did you build and work with during this time?
Sachit Kamat 15:31
Let me give you a very specific example from my time at Uber, which is a company, obviously, that tries to move people from point A to point B. During the pandemic, effectively in about a week, people stopped moving. And now this company had to figure out how to move its people towards solving the problems that needed to be solved. The challenge it faced at the time was that it didn’t have enough information and data about what its people were capable of doing. So, even though the Uber Eats business was trying to grow insanely fast, it couldn’t figure out how to navigate the people movement within the company. That’s a great example of a data problem: not understanding employee capabilities and employee skills effectively impacted the company in a negative fashion during those times. You could read about what happened, it’s out there in public, in terms of layoffs and other things that needed to happen, and then the correction of trying to fuel the new business. Interestingly, that’s the problem my current company solves: we use data and AI to help companies navigate these sorts of changes at scale. A lot of companies are effectively using this technology in practice now for making these sorts of moves, and especially as these economic cycles keep getting more aggressive and more impactful to your business, I think what we have seen is that there has been this movement towards skills as the currency.
Vatsala Sarathy 17:08
Yeah, and they’re not only more impactful, but these changes are also happening at a faster rate, and we’ll talk about that as well. Great point, Sachit. Now I want to ask a question that came in, directed to Martin: down the road, we may find that the ability to effectively use the old-fashioned search engine isn’t that valuable after all. What are your thoughts, Martin?
Martin Miller 17:31
Absolutely. I find myself looking at search engines, without naming the top two or three vendors out there, and you come back with a lot of sponsored searches. Alternatively, I’ll go into ChatGPT, or something like it, and I’ll ask the same question, and I’ll get a less biased, less promotional return. Now, there are some recency challenges with ChatGPT’s dataset, but if my question isn’t about something in a current news article or current event, or where the data needs to be current, I’ll likely use it. I see the impact of this generative technology being applied back into search engines, and, I’ll bite my tongue, but I’d pay for that service to not have sponsored placement. Does that make sense?
Vatsala Sarathy 18:21
Yeah. And that also leads me to think that the way people think about marketing is going to evolve rapidly as well. Matthew, talk to us a little bit about this: we start off with data, but there are different kinds of data, and data can evolve into learning, deep learning, and ultimately what we refer to as knowledge. But, you know, these are fuzzy terms, even for me. How should leaders think about these terms, and what are the differences between them?
Matt Versaggi 18:50
Yeah, awesome question, because there’s a lot to unpack in there. When you look at data, you’re really thinking of the data-oriented AI: machine learning, deep learning, GPT, NLP, visual, right? That’s actually a very small part of AI; it’s just what we commonly think of as AI. I started my journey back in the 80s, so my aperture for AI is much larger. Machine learning and its ilk have very hard limits: it cannot reason dynamically, it cannot learn on its own, it cannot do a lot of stuff. What it can do, it does extremely well, but it won’t be the core engine of a robot that walks around your house, cooks your dinner, cleans your house, and walks your dog. When you get into cognitive technology that’s built around the mechanics of the human brain, like cognitive architectures (the SOAR cognitive architecture, ACT-R, and many of the other bases for doing cognitive research), those are the engines that power military drones that work on their own, that can look at an environment, change their mind, and learn dynamically and autonomously. That kind of system does not just learn off data, and it doesn’t just learn off its experience in the outside world, either. It needs to be seeded with structured knowledge of the domain it’s going to operate in, and we did this in the healthcare space. And then it needs to know how to reason and think over that domain. That’s an entirely different aspect of AI, and it’s moving more towards the AGI space at an engineering level. So, when you start looking at these new kinds of AI, they are going to morph together over time, and that’s what we’ll see: data moving into structured knowledge, and structured knowledge moving into another representation of its engagement with the outside world.
Many self-driving cars are trained in 3D mesh modeling environments; that’s how they learn to avoid things in those kinds of volumes. You’ll see all of that change over time as these technologies merge.
Vatsala Sarathy 21:05
Yeah, awesome. Balaji, I’m very curious to hear about data culture within organizations, and data curiosity. How do we build this? Is it a top-down approach or a bottom-up approach? And how do we hire and retain people who have high data curiosity? How do we even identify such people and build that data orientation as a culture within the organization?
Balaji Veeramani 21:38
Hey, such a great question. What you spoke about is a real problem right now. By 2030, we are expecting around 1.2 billion people to need to upskill. On Facebook or LinkedIn, I’m sure you guys have been hit with the ads: data, we need to use it as a product; you need to move your career to data. It’s a buzzword everywhere, and for a reason; that’s why this forum is happening. In every organization across the globe, as we speak, data is the key cornerstone. If the data is not there, if you don’t organize the data, we can’t even talk about ML and AI. Data is fundamental; it’s your fuel for whatever we want to do in this ML/AI space. For most of the leadership, it’s a top-down approach, and to an extent bottom-up. I’ll tell you why it’s top-down: most of the leaders understand it; all our leaders get this. Now it’s about getting that message to the bottom, to our lower-level resources, and upskilling them to the new technology. For example, people working on SQL Server have too much data, so they need to upskill: what are the new cloud technologies? How do we stream that data? How do we influence this data? How do we curate this data for the next level, which will enable path-breaking ML use cases? So, it’s both ways, but it starts top-down. I think the top has already reached there; for anyone listening, it’s high time to get your boots on and start traveling towards upskilling.
Vatsala Sarathy 23:32
Yeah. Sachit, talk to us about skill obsolescence and the new competencies we must think strategically about when we train our teams as we embark on these projects.
Sachit Kamat 23:47
Yeah, so I think a lot of corporations right now are starting to think about how they can become more of a skills-based organization. Effectively, in the previous world, many companies would hire based on roles and people. In the planning process that typically happens on an annual basis at most companies, you make a judgment: here’s the budget that we have, and here’s our map of the actual people. I think what a lot of corporations have realized is that that really sub-optimizes the entire problem, because it ignores what these people do and what skills are needed for the company to be successful. So, a lot of corporations are now moving towards this world of thinking about skills as the unit: what do I need to achieve the business goals that I’ve set over the course of the next year, or the next few years, and how do I hire the right people who have those skill sets? You talked about data as a skill; how does that map into the long-term goals a corporation has? Then you make plans towards creating the right skill sets that can achieve those goals. That is where the industry is headed now, and I think that kind of transformation is necessary. I know digital transformation has been one of those buzzwords that a lot of leaders throw around, but every time you take a manual process and turn it into something automated, it effectively becomes a piece of technology: it starts to emit data, and it starts to add more complexity. So, you solve one problem, and then you realize there’s a whole other set of problems you need to solve.
Yeah, and that is kind of the interesting thing about these spaces and technologies: there’s always going to be an infinite set of problems to solve, and I see the skills required to solve them continuing to evolve. So, in many ways, I think it’s an exciting time to be in the space.
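[Editor's note] The skills-as-the-unit planning Sachit describes can be reduced to a tiny sketch: compare the skills a plan requires against what the workforce has, and the gap tells you where to hire or upskill. The skill names and people here are hypothetical.

```python
def skill_gap(required, workforce):
    # Compare skills a plan requires against what the workforce already has.
    # required:  set of skill names the business goals call for
    # workforce: dict mapping person -> set of that person's skills
    available = set().union(*workforce.values()) if workforce else set()
    return required & available, required - available

required = {"python", "sql", "mlops", "data-modeling"}
workforce = {
    "ana": {"python", "sql"},
    "raj": {"sql", "data-modeling"},
}
covered, missing = skill_gap(required, workforce)
print("covered:", sorted(covered))  # ['data-modeling', 'python', 'sql']
print("missing:", sorted(missing))  # ['mlops'] -> hire or upskill
```

Real skills platforms infer and normalize skills rather than taking exact string matches, but the planning loop (goals, to required skills, to gap) is the same shape.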
Vatsala Sarathy 25:47
Yeah, for sure. Matthew, because of the fast rate of change happening in this space, people constantly seem to have to reskill and upskill, and that can cause fatigue, fear, or even uncertainty. How can leaders help their workforce deal with this?
Matt Versaggi 26:09
Perfect question. One of the key aspects in 2023 is that we’re going to see an accelerated acceleration in terms of tech and business. The net of that is that we’re used to having our skills become obsolete at about the 80% mark of our tenure at work. Because of this accelerating acceleration, that’s now being pushed back into the first third. So, if you look at your tenure as a working adult, from fresh out of school to the time you pull the plug at retirement, the expiration date on your skills is really going to arrive at about the first 20% to the first third of your career. And you’re going to have to reinvent yourself, and then you’re going to have to reinvent yourself again. One of the phenomena we’re seeing out there, because of the acceleration of tech and global coordination as well, is that the upskilling capabilities and the reinvention capabilities that need to happen, because your job changes so quickly that you need to change with it, are going to be a vital part of staying relevant. In the grand scheme, as that happens, what tends to happen is we create a useless class: things change so fast that an entire class becomes useless to the corporate and political system, and unless they can reskill quickly, they’re going to get pushed to the fringe. So this is a very serious issue to solve, and it’s due to the quickness and acceleration out there.
Vatsala Sarathy 28:00
Right. Moving on to another very important topic related to AI and machine learning: the risks of embarking on this journey. So, Martin, maybe you can start off talking about biases and hype. We know there are so many different biases, and they go all the way from data source to decision-making and anything in between. Talk to us about how we should be thinking about these and working to avoid some of that.
Martin Miller 28:38
Sure, that is a hot topic and a hot area. Let’s break it apart: what are you trying to accomplish with your solution? Have a measurable KPI in some small sense, and assume your data isn’t perfect; start there. What are you going to do with the anomalies in the data? How are you going to handle anomalous decision-making? And then step back a moment and be realistic: am I looking to produce a 50% lift in my corporate revenue based on this solution? That may not be a practical expectation. So put your expectations in alignment with where you are.
Vatsala Sarathy 29:23
Yeah. Matthew, anything to add?
Matt Versaggi 29:26
Um, yes. This is one of the big realizations we’ve looked at, and a lot of futurists are looking at it as well, as a kind of fallout from what they do: the reliance on modern-day media. When you look at who owns whom, and the consolidation all the way up to two or three global governing bodies (at least in the US, there are three), all media flows up through there. It has ceased to be an arena of true news and has embarked into an arena of propaganda and influence. So, what we look at as technical news articles and journals tends to be more about influencing than informing. When you look at the hype cycle that gets promoted around that, we can no longer treat media as an unbiased source; we must put on our filters: what are they pushing? That’ll go a long way. But we must make the realization that it’s a tainted source of information.
Vatsala Sarathy 30:32
Awesome. Another risk related to AI and machine learning is when systems or algorithms don’t function as expected, or even become irrelevant or counterproductive. So, I want to ask both Sachit and then Balaji: how would you think about situations like this, and how do we build guardrails from the get-go? These things are going to happen; at some point, we will realize the learning algorithm is not working as expected, or things have changed outside its scope and it doesn’t produce results. What do we do to build systems that can address these issues, or at least flag them as they come up?
Sachit Kamat 31:19
Maybe, to also touch upon the bias point from the previous conversation and link it to this: one of the flaws of machine learning is that it can sometimes just mimic the decisions that have been made by humans, when you look at the data and train against it. And if bias is prevalent in the decisions that have previously been made, then machine learning can potentially become a tool that amplifies that bias. That’s the kind of stuff where you need controls in place. A simple example that I like to give here: let’s say you were trying to hire for a particular role, and you happened to mention in the description that golf would be a plus. If you start to build machine learning algorithms that are unsupervised in this particular domain, what you will end up with is a situation that will greatly favor men as candidates over women. And that’s a bias you don’t want to inject at scale, because the damage that machines can inflict happens at a much larger scale relative to what individual humans can do. So those are examples where you must have the right controls and the right testing, and you must make sure you’re not scaling bias. That’s a concrete example of where there needs to be a balance here.
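[Editor's note] One common control of the kind Sachit calls for is a selection-rate audit. The sketch below computes a disparate-impact ratio over hypothetical screening outcomes; the 0.8 threshold is the well-known "four-fifths rule" heuristic, and all data is invented.

```python
def selection_rates(decisions):
    # Per-group selection rate from (group, selected?) pairs.
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, group_a, group_b):
    # Ratio of group_a's selection rate to group_b's. Ratios below 0.8
    # are commonly flagged for review before a model ships.
    rates = selection_rates(decisions)
    return rates[group_a] / rates[group_b]

# Hypothetical screening outcomes: (candidate group, passed screen?)
decisions = (
    [("men", True)] * 6 + [("men", False)] * 4 +     # 60% pass rate
    [("women", True)] * 3 + [("women", False)] * 7   # 30% pass rate
)
ratio = disparate_impact(decisions, "women", "men")
print(f"impact ratio: {ratio:.2f}")  # 0.50 -> below 0.8, flag for review
```

Running an audit like this on every retrained model is one concrete way to keep a biased signal (such as the golf example) from being scaled silently.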
Vatsala Sarathy 32:42
Balaji, anything to add?
Balaji Veeramani 32:44
Absolutely. With any modern new technology, guardrails are very important, and so is testing, particularly in this space, because, just imagine, we are giving control to the machine to some extent. I’m going to give you a very real and intriguing example: derailments. There are a lot of railroads, and we want to solve the derailment problem. We can use machine learning for that. How do we do it? We train the models and make sure they have the latest and greatest data, and we start with supervised training. Then we get a model and check whether it’s working: you do the Turing test and whatnot, and then slowly move to unsupervised learning and see how the model is performing. It’s not like, oh yeah, we got the idea, boom, we’re getting into prod now. There is rigor and a rigorous testing process, because we are playing with lives. There are critical decisions made based on AI, believe it or not, and that goes into our mental model to approve that solution and keep updating it. That’s why this question is very important: once we develop the model, we are not done; it keeps evolving in the current world. For example, with self-driving cars, we are playing with lives; with the amount of data coming in, the model cannot mess up, so there is frequent learning. That’s why this current era, even though machine learning dates from the 1950s, this era of deep learning and generative AI, is path-breaking: it comes from the evolution we made in unsupervised learning, with models that can teach themselves based on the datasets they have, using live inferencing. Having said that, the checks and balances and maintaining the ops are just as important as we go.
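[Editor's note] A minimal guardrail of the kind Balaji describes is to route only confident model scores to automated action and escalate the uncertain middle band to a human. The thresholds and labels below are hypothetical.

```python
def guarded_decision(model_score, threshold_low=0.2, threshold_high=0.8):
    # Route a model score through a simple guardrail: confident scores
    # become automated decisions; the uncertain middle band is escalated.
    if model_score >= threshold_high:
        return "auto-approve"
    if model_score <= threshold_low:
        return "auto-reject"
    return "human-review"

print(guarded_decision(0.95))  # auto-approve
print(guarded_decision(0.50))  # human-review
print(guarded_decision(0.05))  # auto-reject
```

In a safety-critical setting like derailment prediction, the thresholds would be tuned against tested error rates, and every "human-review" outcome would also be logged to monitor whether the model is drifting out of its validated scope.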
Vatsala Sarathy 34:38
Yeah, absolutely. I just love how that example showed us what we can anticipate if we don’t have these controls in place, even in settings as simple as a job-description mention. There’s a good question related to this about feeding AI algorithms the right quality and volume of data. What happens if we don’t have access to it, or the quantity doesn’t yet exist? How can you address that as a bias problem and still use representative, high-quality data? Anyone?
Balaji Veeramani 35:16
That’s a good question. This is really a business problem, so we need to think about it rationally. We need to understand what problem we’re trying to solve: are you trying to solve a real problem, or are you just trying to jump on the AI bandwagon? If you’re jumping on the bandwagon and you don’t have enough data to execute properly, you need to ask yourself these questions, because you’re not ready yet. I can’t repeat it enough: proper, clean data is the fundamental cornerstone for all of this. If you have no confidence in your data, you probably need to step back and work on your basics: how do I get this data, and how do I curate it for myself and my organization? That’s your fundamental check.
Vatsala Sarathy 36:05
Yeah. Matt, can you talk about what Balaji just said about curating data?
Matt Versaggi 36:12
I mean, he’s right: data is king in the machine learning and deep learning world; you must have it. The assumption, though, is that you must get it from the outside world — that you must go observe some phenomenon in the wild that you’re interested in, grab all the data you can, and then train an algorithm on it. That’s not entirely true. That assumption works most of the time, but you can also start developing synthetic data to train an algorithm to behave the way you want it to behave. If you take a step back and abstract it: when you look at the wild and gather a bunch of data, you’re trying to model the behavior of that phenomenon. The movement now is: if I can’t get that data, but I know how the system is supposed to behave, how about I artificially create data that’s as rich and as dense as what I’d find in the wild, but covers all the use cases and eliminates the bias concerns we have? That only applies in certain areas, but you’ll see a movement toward synthetic data to help solve that problem.
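One minimal way to picture Matt’s point: instead of inheriting the skew of observed data, a synthetic generator can enumerate every combination of features equally and assign labels from a rule you choose deliberately. The features and the labeling rule below are invented purely for illustration:

```python
import itertools

# Hypothetical categorical features for a screening model. Real-world
# samples would be skewed toward whatever history produced them; here we
# enumerate the full cross-product so every combination is represented
# equally in training.
FEATURES = {
    "experience": ["junior", "mid", "senior"],
    "education": ["bootcamp", "bachelors", "masters"],
    "region": ["na", "emea", "apac"],
}

def synthetic_dataset(copies_per_combo=10):
    rows = []
    keys = list(FEATURES)
    for combo in itertools.product(*(FEATURES[k] for k in keys)):
        for _ in range(copies_per_combo):
            row = dict(zip(keys, combo))
            # Label comes from a rule we choose, not from biased history.
            row["label"] = int(row["experience"] != "junior")
            rows.append(row)
    return rows

data = synthetic_dataset()
print(len(data))          # 3 * 3 * 3 combinations, 10 copies each = 270

counts = {}
for r in data:
    counts[r["region"]] = counts.get(r["region"], 0) + 1
print(counts)             # every region equally represented
```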
Vatsala Sarathy 37:18
Back to you. Yeah. Now, we are coming up on time, and there are a couple of good questions, but before we look at those, there’s one last area I want to address. We all know that the adoption of AI is not the same across all industries, companies, and functions; there are delayed adoptions within certain core business processes. So let’s talk a little bit about that. Why is that, and what can we do? Let’s start with you, Sachit.
Sachit Kamat 37:50
Yeah, this inertia that exists in corporations is very real. Even for applications that are somewhat mature, you still must make the very core business case: is it a cost-savings argument or a revenue-growth argument, from an internal perspective? And then, at times, there are real conversations that need to be had when automation is going to enter a space: what about the people currently doing those jobs, and how will they be transitioned to other things within the organization? These are hard questions that need to be solved within any company. So until many of these corporations see the clear value, the clear business case, and then understand how to manage the change within the organization, it is still going to be a slow roll for the most part, regardless of whether it’s a mature application or a completely brand-new one. That said, it is becoming much clearer what the value is, particularly for certain roles that are going to change very materially. For example, the role of a copywriter is going to fundamentally change, if it hasn’t already, given the generative technology that is out there now. These are the kinds of things a lot of companies are grappling with, and it is a very real struggle to make the shift toward adopting these technologies.
Vatsala Sarathy 39:14
Yeah. And it’s also quite related to upskilling and reskilling; the relationship between humans and machines is changing, and so is how we think about it. That’s a very interesting topic as well. When different functions adopt these processes at different rates, how do you bring together teams, roles, and responsibilities, or even build the organizational structure? Anyone?
Matt Versaggi 39:40
Let me take a stab at that, and then I’ll punt over to Balaji. Let’s be clear: the adoption rate of AI in big orgs is abysmal, for various reasons, and it’s not nearly as far along as we had anticipated. As companies began that journey, the three things that took the blame were data, infrastructure, and talent, and we have largely solved those problems. The last piece was that as we began to teach AI to managerial staff, that’s when we found one of the real impediments. Now, companies will start by building AI into their core business; that’s what they’ll try first. But then they’ll notice that AI, as a general-purpose technology, can play a magnificent role in all the other supporting areas of the business as well: HR, marketing, finance, you name it. So you’ll start seeing different clock speeds in how AI gets adopted in the core business itself, the thing you do as a business, versus the supporting departments that really make things run. So I’ll pass the baton.
Vatsala Sarathy 41:17
Great. Anyone else? There are a couple of other questions I want to get to as well. Okay, so there is an interesting question related to the hype and to governance: is government regulation going to make any significant difference, or is the cat already out of the bag?
Sachit Kamat 41:38
Yeah, maybe I can jump in here. The best way to put it is that the rules of the road are still being written, in many ways, in terms of what is good for society and how to bring some of these applications in without hurting economies. That sort of thinking is going on right now within the regulatory space. Europe is a bit ahead on some of this; there’s a European AI regulation in the works that they’ve been developing. But certain jurisdictions within the US have jumped ahead as well. A great example is New York, where the government has stepped into the AI-in-employment space and started writing regulation around it. I think it’s forcing people to really think through the implications of this technology on society. So the regulators are a bit behind right now, and in many ways they’re playing catch-up with the speed at which the technology is being developed, but that is not uncommon given how quickly technology evolves.
Vatsala Sarathy 42:48
Yeah. There’s a question that is very platform-related, and I’m not sure if it’s specifically for one of you: to enable human intervention into any AI, how is this possible through your platform? Is there a need for modification by humans to the existing algorithms? Balaji, maybe?
Balaji Veeramani 43:10
Yeah, since I’m from the platform side: people imagine the model just writes its own rules and teaches itself with no control, but we actually have a lot of control. Today, if you need to deploy a machine learning model to production, it’s deployed by a human; we have MLOps engineers, just like any other ops function, and deployment techniques within the platform that give us that control. For example, UPS reportedly saved around 400 million just by deploying a machine learning model to optimize delivery routes, and if they want to change something, they can change it. So human intervention is always there; it’s part of the deployment technique we always have. We keep repositories of models, we score each model’s performance, and with unsupervised learning we also score against the real world using whatever evaluation tests we have. Then we decide whether to promote the new model into prod, and we have guardrails around that decision. I hope that answers the question, purely from a platform perspective.
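The registry-plus-guardrails setup Balaji outlines — versioned models, scored performance, and a human holding the promote and rollback levers — can be sketched minimally. This is not any vendor’s actual API; the class and its rules are invented for illustration:

```python
# Hypothetical minimal model registry: a challenger replaces the champion
# only when it scores higher AND a human approves, and rollback restores
# the previous production version -- human intervention at every step.
class ModelRegistry:
    def __init__(self):
        self.versions = {}      # model name -> offline evaluation score
        self.prod = None        # name of the current champion
        self.history = []       # previous champions, for rollback

    def register(self, name, score):
        self.versions[name] = score

    def promote(self, name, human_approved):
        champion = self.versions.get(self.prod, float("-inf"))
        if self.versions[name] <= champion or not human_approved:
            return False        # guardrail: keep the current champion
        self.history.append(self.prod)
        self.prod = name
        return True

    def rollback(self):
        self.prod = self.history.pop()   # human-triggered intervention

reg = ModelRegistry()
reg.register("v1", 0.88)
reg.register("v2", 0.91)
reg.promote("v1", human_approved=True)
reg.promote("v2", human_approved=False)   # blocked: no human sign-off
print(reg.prod)                           # v1
reg.promote("v2", human_approved=True)
print(reg.prod)                           # v2
reg.rollback()
print(reg.prod)                           # back to v1
```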
Vicki Lynn Brunskill 44:29
Very good. I think Vatsala has dropped off for a moment. So, Martin, you had 10 seconds, and then, Sachit, I think you had 10 seconds as well.
Martin Miller 44:36
Let me just add to what was just said: think of observability and explainability as the methodology that helps answer the question of interacting with the model and adjusting it. Without a dashboard of some sort, with metrics and KPIs, you have nothing to interact with. I’ll stop there, because I’ve already gone beyond my 10 seconds.
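The kind of signal Martin’s dashboard would surface can be sketched as a rolling KPI monitor that flags drift from a baseline. The metric, baseline, and tolerance below are all hypothetical:

```python
from collections import deque

# Hypothetical observability sketch: keep a rolling window of one model
# KPI (say, a daily acceptance rate) and flag when its mean drifts
# outside a band around the baseline.
class MetricMonitor:
    def __init__(self, baseline, tolerance, window=5):
        self.baseline = baseline
        self.tolerance = tolerance
        self.values = deque(maxlen=window)

    def record(self, value):
        self.values.append(value)

    def drifting(self):
        if not self.values:
            return False
        mean = sum(self.values) / len(self.values)
        return abs(mean - self.baseline) > self.tolerance

mon = MetricMonitor(baseline=0.70, tolerance=0.05)
for v in [0.71, 0.69, 0.72]:
    mon.record(v)
print(mon.drifting())   # False: within the band

for v in [0.55, 0.52, 0.50, 0.51, 0.49]:
    mon.record(v)
print(mon.drifting())   # True: the KPI has drifted, time to intervene
```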
Sachit Kamat 45:00
Yeah, I’ll just add that the ideal experience is one where humans do have a way to interact with the model and effectively change some of the outputs the model is producing, and the right software lets you do that by tuning those inputs.
Vicki Lynn Brunskill 45:17
Perfect. Thank you. Thank you, everyone. This has been such a good panel; I wish we could go on longer, it went too fast. I want to thank Vatsala, Sachit, Martin, Matt, and Balaji for an excellent session. I also want to thank everyone for joining us today. This session, along with all of today’s content, will be available on demand following the event. Thank you all.
Unknown Speaker 45:37
Thank you. Bye, everyone. Bye. Bye. Thank you.