Lawrence Ng

The Possibilities and Responsibilities of Using AI: Agnes Heftberger, IBM

Agnes Heftberger, IBM's General Manager and Technology Leader, ASEANZK, spoke with tech360tv's Eunice Olsen about the possibilities and responsibilities of AI. They talked about how AI can help businesses and the environment, but they also emphasised the need for ethics, trust, openness, and skill development so that everyone can benefit from this technology.



Eunice Olsen:

IBM has made significant investments in AI and considers it an integral part of its core DNA, and you have been working on it for a long time. We are all familiar with Watson, and now there is a new version called watsonx. Could you please explain IBM's AI vision?


Agnes Heftberger:

Basically, our vision is to help enterprises, organisations, and institutions innovate with technology and solve some of their most pressing challenges, whether those are business challenges or societal challenges. And we believe in two technologies that are reshaping the world we live in today: the first is hybrid cloud, and the second is AI.

When the two work together, we have opportunities at our fingertips that make things possible that simply weren't before. So this is what the IBM strategy is all about right now: bringing hybrid cloud and AI to clients and organisations all over the world in a way that allows them to consume it, use it safely and transparently, and really put it to the best use possible.

This is what AI means to us.


Eunice Olsen:

Now I would like to discuss foundation models briefly, since that is what you are working on and helping your partners with. You also have an incredible partnership with NASA: you are using satellite imagery to examine the most severe consequences of climate change and environmental transformation, and this will revolutionise the way we approach sustainability problems. Could you share more on that?


Agnes Heftberger:

Foundation models basically open up a completely different set of possibilities for AI right now.

With the advent of foundation models, we now have the ability to put AI use cases into action much faster than with previous generations of AI. It's really exciting to see the various use cases, as well as the various types of models, that are available, and the NASA example is a beautiful one.

I think about it from two perspectives. The first is what I have been saying about using technology to solve some of the most difficult problems we're facing, and one of those is unquestionably sustainability. We recently polled CEOs all over the world, and nearly 70 percent of them said, yes, we want to use AI for sustainability. So this is a real-world application that enables us to do that. It's also a good example of how foundation models and generative AI are not just about large language models; there are different types of models.

This is a geospatial model that we created in collaboration with NASA, which has a massive amount of satellite and sensor data. We've trained that model together, so we can see, for example, how land use is changing, where carbon is being stored, and how populations are moving around. We can also put it to use in agriculture.

It enables the use of massive amounts of data that were previously inaccessible due to technological limitations.

And now, with AI on that foundation model, we can tap into the vast amount of geophysical data that is available.

It basically advances our understanding of what we need to do in order to be more sustainable, but we can also use it from a business standpoint, for example in the agricultural or consumer space, and there is more we can do with it.

You asked me about one of my favourite use cases, and this is it. There are so many details we learn about just the land that surrounds us, and there's so much data we can use to make better decisions, both in terms of sustainability and business. The best part is that the two are not mutually exclusive.
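To make the land-use idea a little more concrete, here is a minimal, hypothetical Python sketch of pulling a vegetation signal out of raw satellite bands. It is not the IBM-NASA geospatial foundation model, just a classical NDVI calculation over an assumed two-band GeoTIFF; the file name and band order are placeholders for illustration.

```python
import numpy as np
import rasterio  # common open-source library for reading satellite rasters

# Hypothetical multispectral scene; band order assumed: 1 = red, 2 = near-infrared.
with rasterio.open("scene.tif") as src:
    red = src.read(1).astype("float32")
    nir = src.read(2).astype("float32")

# NDVI highlights vegetation: values near 1 indicate dense greenery, near 0 bare soil.
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

# Fraction of the scene that looks vegetated under a simple threshold.
vegetated_fraction = float((ndvi > 0.4).mean())
print(f"Vegetated fraction of scene: {vegetated_fraction:.2%}")
```

A foundation model trained on this kind of imagery goes far beyond a single index, but the sketch shows the sort of raw data such models start from.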


Eunice Olsen:

As we sit here in this luxurious BMW, it is worth noting that BMW has been enhancing its manufacturing processes. In the United States, at Plant Spartanburg, BMW builds more than 1,500 vehicles a day. AI is used to place half a million metal studs on the SUVs with precision and accuracy, and a total of 26 cameras integrated along the production line detect any errors, issues, or anything else that needs to be brought to the attention of the people overseeing the operation. How can an IBM AI foundation model enhance productivity and streamline tasks and operations for businesses?


Agnes Heftberger:

If we just look at a couple of examples that get a lot of focus in the organisations we work with, they all fall into this category of trying to automate, and take the pain out of, workflows and processes that are very tedious or very complicated.

Eunice Olsen:

BMW has consistently supported the use of responsible AI, and indeed the concept of responsibility is gaining significant importance globally. For instance, they adhere to seven principles, one of which concerns human agency and oversight: there is a deliberate effort to incorporate sufficient human oversight into the decision-making of AI applications, along with mechanisms for humans to override algorithmic decisions. Now, that sits largely on the policy side. Then there is the topic of responsible AI design, which I think is equally important. It would be good to hear from a technologist: are both equally significant, or does one require a slightly higher degree of attention?


Agnes Heftberger:

First and foremost, I believe it is critical that we discuss responsible and ethical AI, and I am grateful that you have brought this up and that BMW has placed such emphasis and importance on it; we have been talking about this for a long time. It is really encouraging to see awareness rising, to see people spending time understanding what it means, and to see decision-makers working out how they can actually put it into action. I believe both types of action are important. Having a framework for what responsible AI means and which principles we should adhere to is important, and so is the other element: how do we actually design AI that is responsible by nature? You can't have just one.

At IBM, we have five principles, not seven, but they are extremely important to us. The outcome of AI must be explainable, so explainability is essential. Transparency is also essential: if you are interacting with a piece of technology as a human, you should be aware that AI is at work, and you should be able to rely on the outcome. We have heard and talked a lot about hallucination issues, so robustness matters, and so does privacy: the data used to train AI, and the workflows around it, need to be super secure. That is a major concern shared by many CEOs: how can they ensure that privacy is maintained?

The other issue is fairness. We've talked a lot about how to ensure that the outcome of an AI decision is fair. For example, in the banking space, whether someone gets a loan or not, how do you ensure that there is fairness?


Eunice Olsen:

I am so glad to have the opportunity to interview a female guest on the show: a female technologist, a woman working in STEM. Indeed, addressing biases in AI is extremely important. Humans input the data into AI systems, so it is necessary to incorporate a gender lens in order to address biases and ensure equal representation in AI.


Agnes Heftberger:

The lack of bias, or the need for fairness, has to be handled through the principles that you have as a company that develops or uses this type of technology, or both, but it also needs to be built into the technology. This is a key element that we want to ensure on our side as well: that we not only have a good representation of different parts of society when we develop the technology, but that we also provide the toolkit that allows companies to test. Is there a problem with our model? Does it carry prejudice? We need to look at it as life-cycle management overall, and we actively look for any problems with the fairness principle in the technology itself. I believe it is extremely important to understand the principles of the technology, and on the company side you need diverse teams actually working on it, so that you can test for any bias and drift issues.
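As a rough illustration of what such a fairness check can look like in practice, here is a small Python sketch using IBM's open-source AI Fairness 360 library. The toy loan data and the choice of "gender" as the protected attribute are made up for the example, and this is not necessarily the exact toolkit referenced above.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical loan decisions: 'approved' is the model outcome,
# 'gender' is the protected attribute (1 = privileged group, purely for illustration).
df = pd.DataFrame({
    "gender":   [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [55, 72, 40, 90, 52, 70, 41, 88],
    "approved": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact near 1.0 and statistical parity difference near 0 suggest approvals
# are distributed similarly across groups; large gaps flag possible bias to investigate.
print("Disparate impact:             ", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Checks like this are typically run across the whole model life cycle rather than once, so that bias or drift in the data and the model can be caught as it appears.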


Eunice Olsen:

Since we are still on the topic of ethics, let us address the question of whether or not AI should be permitted to participate in regulatory and ethical decision-making. Should AI be permitted in this realm?


Agnes Heftberger:

At IBM, we advocate for very precise regulation; we refer to it as precision regulation. AI is and will be used in a wide range of applications, each with its own set of risks. Here is an example: is there really a big risk if you use an AI-powered chatbot that recommends which restaurant to go to? Most likely not. Are you dealing with regulations there? Most likely not much. But when you are dealing with company employees, or using AI in healthcare and medical environments or in driverless cars, there is definitely risk involved, and there should be regulation. Being precise in the regulation, in line with the risk that AI actually poses, is really important; I believe that is how we get the balance right.

The use of AI must increase people's trust, but regulation must also be applied in a way that does not stifle innovation.


Eunice Olsen:

For small companies, or even individuals, who are thinking about the impact of this and asking: how does this affect me? How can I get on the bandwagon? What is your advice?


Agnes Heftberger:

First and foremost, I believe it is incumbent on the technology industry to build trust with individuals, particularly owners of small and medium-sized businesses. We can see a lot of advantages in this technology; I mean, you can almost feel it, and you can probably sense it from my excitement. But it will only happen if people and society as a whole have trust in this technology. Obviously, it is up to us in the technology industry, as well as governments and everyone else, to build that trust.

It's also important that we learn and understand how AI is changing the world. Everyone, regardless of what role you're in or what role you play in your life, should read up on AI, be educated, and build the skills for themselves, so that we can understand the potential pitfalls or the opportunities, and what we are dealing with. I believe that there are opportunities for everyone, not just university students, to tap into the possibilities and responsibilities that come with AI.


 

Content in partnership with BMW Group Asia

