Lawrence Ng
Jan 27 min
Eunice Olsen:
Agnes Heftberger:
Basically, our vision is to help enterprises, organisations, and institutions innovate with technology and solve some of their most pressing challenges, whether they are business challenges or societal challenges. And we believe in two technologies that are reshaping the world we live in today. The first is hybrid cloud, and the second is AI.
And when we work together, we now have opportunities at our fingertips that make things possible that we haven't had in a long time. So this is what the IBM strategy is all about right now: bringing hybrid cloud and AI to clients and organisations all over the world in a way that allows them to consume it, use it safely and transparently, and really put it to the best use possible.
This is what AI means to us.
Eunice Olsen:
Agnes Heftberger:
Foundation models are what give AI a completely different set of possibilities right now.
With the advent of foundation models, we now have the ability to put AI use cases into action much faster than with previous generations of AI. It's really exciting to see the various use cases, as well as the various types of models, that are available, and the NASA example is a beautiful one.
I think of it from two perspectives. The first: I've been talking about using technology to solve some of the most difficult problems we're facing, and one is unquestionably sustainability. We recently polled CEOs all over the world, and nearly 70 percent of them said, yes, we want to use AI for sustainability. So this is a real-world application that enables us to do so. It's also a good example of how foundation models and generative AI are not just about large language models; there are different types of models.
This is a geospatial model that we created in collaboration with NASA, which has a massive amount of satellite and sensor data. We've trained that model together, so we can see, for example, how land use is changing, where carbon is being stored, and how populations are moving around. We can put it to use in agriculture.
It enables the use of massive amounts of data that were previously inaccessible due to technological limitations.
And now, with AI on that foundation model, we can tap into the vast amount of geophysical data that is available.
It basically advances our understanding of what we need to do in order to be more sustainable, but we can also use it from a business standpoint, for example in the agricultural or consumer space, and there is more that we can do with it.
I believe you asked me about one of my favourite use cases, this is it. There are so many details we learn about just the land that surrounds us, and there's so much data that we can use to make better decisions, both in terms of sustainability and business. The best part is that it is not mutually exclusive.
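As a rough illustration of the kind of signal such geospatial data carries (this is not the NASA/IBM model itself, and the reflectance values below are made up), a classic vegetation index computed from two satellite bands can flag land-use change between two dates:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index per pixel: (NIR - R) / (NIR + R)."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    # Clip the denominator to avoid division by zero on dark pixels
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Hypothetical 2x2 reflectance tiles for the same area on two dates
red_2020 = np.array([[0.10, 0.12], [0.30, 0.32]])
nir_2020 = np.array([[0.50, 0.52], [0.35, 0.33]])
red_2023 = np.array([[0.28, 0.30], [0.30, 0.32]])
nir_2023 = np.array([[0.34, 0.33], [0.35, 0.33]])

# A falling NDVI between dates suggests vegetation loss, i.e. land-use change
change = ndvi(red_2023, nir_2023) - ndvi(red_2020, nir_2020)
print(np.round(change, 2))
```

A foundation model trained on petabytes of such imagery learns far richer features than a single hand-built index, but the per-pixel, change-over-time framing is the same.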
Eunice Olsen:
Agnes Heftberger:
If we just look at a couple of examples that get a lot of focus in the organisations we work with, they all fall into this category of automating, and getting rid of, workflows and processes that are very tedious or very complicated.
Eunice Olsen:
Agnes Heftberger:
First and foremost, I believe it is critical that we discuss responsible and ethical AI, and I am grateful that you have brought this up, and that BMW has placed such emphasis on it; we've been talking about this for a long time. It's really encouraging to see awareness rising, with people really spending time understanding what it means, and decision-makers spending time understanding how they can actually put it into action. I believe both types of action are important: having a framework for what responsible AI means and what principles we should adhere to, and the other element, actually designing AI that is responsible by nature. And you can't just have one.
At IBM we have five principles, not seven, but they are extremely important to us. The outcome of AI must be explainable, so explainability is essential. Transparency is also essential: if you're interacting with a piece of technology as a human, you should be aware that AI is at work, and you should be able to rely on the outcome. Robustness matters too; we've heard and talked a lot about issues such as hallucinations. And there is privacy: the data that's being used to train AI, and the workflows around it, also need to be super secure. Many CEOs share that major concern: how can they ensure that privacy is maintained?
The other issue is fairness. We've talked a lot about how to ensure that the outcome of an AI decision is fair. For example, in the banking space, whether someone gets a loan or not, how do you ensure that there is fairness?
Eunice Olsen:
Agnes Heftberger:
The lack of bias, or the need for fairness, needs to be handled both through the principles you have as a company that develops or uses this type of technology, and by being built into the technology itself. This is a key element that we want to ensure ourselves: not only do we have good representation of different parts of society when we develop the technology, but we also provide the toolkit that allows companies to test. Is there a problem with our model? Is there bias? We need to look at it as life-cycle management overall, and we actively look for any problems with the fairness principle in the technology itself. Understanding the principles behind the technology is extremely important. On the company side, you need diverse teams actually working on the technology, so that you can test for any bias and drift issues.
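The kind of automated bias test described above can be sketched very simply. This is an illustrative example, not IBM's actual tooling; the loan-decision data and the 10% tolerance are hypothetical, and the metric shown is the classic demographic-parity gap:

```python
def approval_rate(decisions, group):
    """Share of approved applications within one group."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(decisions, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(decisions, group_a) - approval_rate(decisions, group_b))

# (group, approved?) pairs from a hypothetical loan model's decisions
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

gap = demographic_parity_gap(decisions, "A", "B")
print(f"approval-rate gap: {gap:.2f}")
if gap > 0.10:  # hypothetical tolerance for this check
    print("fairness check failed; investigate the model for bias")
```

Run as part of a model's life cycle (at training time and again in production), a check like this is how bias and drift problems get caught rather than assumed away.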
Eunice Olsen:
Agnes Heftberger:
At IBM we argue for very precise regulation, and we refer to it as precision regulation. AI is and will be used in a wide range of applications, each with its own set of risks. Here is an example. Is there really a big risk if you use an AI-powered chatbot that recommends which restaurant to go to? Most likely not. Are you dealing with regulations? Most likely not much. But when you are dealing with company employees, or using AI in healthcare and medical environments, or in driverless cars, there is definitely risk involved, and there should be regulation. Being precise in regulation, in line with the risk that AI actually poses, is, I believe, really important to getting the balance right.
Regulating the use of AI must increase people's trust, but it must also be done in ways that do not stifle innovation.
Eunice Olsen:
Agnes Heftberger:
First and foremost, I believe it is incumbent on the technology industry to build trust with individuals, particularly owners of small and medium-sized businesses. We can see a lot of advantages in this technology. I mean, you can almost feel it, and you can probably sense it from my excitement. But it will only happen if people and society as a whole have trust in this technology. Obviously, it is up to us in the technology industry, as well as the government and everyone else, to build that trust.
It's also important that we learn and understand how AI is changing the world. Everyone, regardless of what role you're in or what role you play in your life, should read up on AI, be educated, and build the skills for themselves, so that we can understand the potential pitfalls or the opportunities, and what we are dealing with. I believe that there are opportunities for everyone, not just university students, to tap into the possibilities and responsibilities that come with AI.
Content in partnership with BMW Group Asia