But change is a given – and tech is often a major driver of such change.

So, how can we ‘future-proof’ ourselves in today’s unpredictable world?

“It's more like a dialogue, a conversation – but with a machine,”

says Tom Lamberty, Innovation Fellow for People Experience at Cisco, on his experience with generative AI tools like ChatGPT.

"And it often sparks something on my end in terms of looking at a topic in a new way that I hadn't thought about before."

Even though human-computer interaction isn’t a new field, as Lamberty points out, the generative AI tools released this decade – ChatGPT, Bard, DALL-E and GitHub Copilot among them – offer a much more natural, conversation-like form of communication. The impact of this technology – the ease, speed, productivity, and even creativity it offers – suddenly feels much more immediate.

In fact, ChatGPT was adopted faster than Instagram or TikTok, and its arrival has brought the potential of AI to the attention of far more people. As Ieva Martinkenaite, SVP, Head of Research and Innovation at Telenor Group, explains, since OpenAI released ChatGPT in November 2022, AI has not only become a general topic of conversation, but it has also been included on board meeting agendas and “there’s a very strong momentum at all levels of the organisation to put AI on the map”.

She traces this back to the ‘fear of missing out’ on the opportunities AI offers, and contrasts that with the ‘fear of messing up’. The latter not only involves more individual fears around not having the skills to deal with the latest technology, or no longer being relevant on the job market, but also addresses wider concerns around security, privacy and whether automation is replacing humans.

“We know that the journey to realising these opportunities is demanding and that there are many important steps for us to take. There are also many questions: What is ethical, what is not; how can we safeguard ourselves? And what do companies need to do?”
Ieva Martinkenaite, SVP, Head of Research and Innovation at Telenor Group.

Cisco’s Lamberty agrees that we are living in unpredictable times – he mentions the pandemic and the Russia-Ukraine war as particular recent causes for concern – and that for many people, “it’s going to be new, and it’s going to be tough”.

For example, tech entrepreneur Elon Musk described AI as the most disruptive force in history while speaking to UK Prime Minister Rishi Sunak in November 2023, and claimed that at some point no job would be needed because AI would do everything. Meanwhile, a spring 2023 report by investment bank Goldman Sachs stated that AI could replace a quarter of work tasks in the US and Europe, but might also create new jobs and spark a productivity boom.

“There is concern, rightly so, because the job landscape is going to change, probably significantly in some areas. However, I think in the long run, AI is going to create more jobs than it will take away. We had a similar discussion around 20 years ago, so we have seen this evolution before. Change is a given. And tech is a major driver of this change, if not the most important driver.”
Tom Lamberty, Innovation Fellow for People Experience at Cisco

But what exactly is AI – and what isn’t it?

According to the GSMA (the association representing the interests of mobile network operators worldwide), AI is a powerful, emerging force that is transforming business and society. PwC estimates that it could contribute $15.7 trillion to the global economy by 2030. Yet it isn’t a futuristic technology: it is already used across a wide variety of industries, and many people will have encountered it in their daily lives, for example in Netflix’s viewing recommendations or in voice assistants like Siri and Alexa.

“AI is a set of technologies that enable us to make better business decisions and automate routine tasks, ultimately making us more productive and fulfilled at work,”

summarises Martinkenaite, highlighting that the real benefits, and the potential risks, of AI depend on how people develop and use it. She also points out that AI as such is no more and no less than powerful software, and that human action is necessary to embrace the power of this technology and unlock significant value. Similarly, Lamberty says that AI is “a piece of technology you can use in the right way, or in the wrong way. It has flaws, and we have to recognise and talk about these.”

How is Telenor helping unlock the potential of AI?

The Green Radio project aims to reduce energy consumption in mobile networks by using AI algorithms to optimise traffic and automatically adjust data usage on each base station. A pilot project in Denmark demonstrated a potential 2.5 percent reduction in power usage, saving the equivalent of around 700 tonnes of CO2 emissions.

Telenor’s Finnish operator, DNA, expected energy consumption to grow six to seven percent in 2021 as it deployed 5G equipment and increased capacity four- to five-fold. Instead, it saw energy consumption fall in some areas of the network. By using data sets and AI to analyse behaviour and predict mobile data traffic requirements, the operator has been able to optimise its energy use.
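The common pattern in the Green Radio and DNA examples is to forecast traffic per base station and power down spare capacity when predicted demand is low. The sketch below is a minimal illustration of that idea, assuming hourly traffic samples, a naive moving-average forecast and a hypothetical sleep threshold; the names and numbers are invented, and production systems use far richer models and safety margins.

```python
# Minimal illustration of AI-assisted energy saving in a radio network.
# Hypothetical assumptions: hourly traffic samples per base station,
# a moving-average forecast, and a fixed sleep threshold.

from statistics import mean

SLEEP_THRESHOLD_MBPS = 50.0   # hypothetical: below this, spare carriers can sleep
WINDOW_HOURS = 3              # hours of history used for the naive forecast

def forecast_next_hour(traffic_history_mbps):
    """Naive moving-average forecast; real systems use trained ML models."""
    return mean(traffic_history_mbps[-WINDOW_HOURS:])

def plan_carrier_state(traffic_history_mbps):
    """Decide whether a station's spare carriers can sleep next hour."""
    predicted = forecast_next_hour(traffic_history_mbps)
    state = "sleep" if predicted < SLEEP_THRESHOLD_MBPS else "active"
    return state, predicted

# Synthetic overnight traffic for two stations (Mbps per hour).
stations = {
    "BS-001": [120, 90, 60, 40, 30, 25],       # traffic falling off at night
    "BS-002": [200, 210, 190, 180, 175, 185],  # busy urban cell
}

for name, history in stations.items():
    state, predicted = plan_carrier_state(history)
    print(f"{name}: predicted {predicted:.0f} Mbps -> spare carriers {state}")
```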

Telenor contributes to big data analysis for responsible AI both in Europe and Asia. This work has, among other things, helped predict and control the spread of dengue fever in Pakistan and malaria in Bangladesh.

Can AI make us ‘more human’?

Before digging deeper into these flaws, it’s important to acknowledge that there’s tremendous excitement around AI in the workplace because of the benefits it brings: it can enhance productivity, streamline processes, and improve decision-making. For example, it automates repetitive tasks such as data entry or invoice processing, analyses large datasets quickly to provide insights, powers virtual assistants, chatbots and personalised learning, and helps prevent security threats.

Lamberty’s colleague Cathrine Andersen, Global Account Manager in the Service Provider segment at Cisco, is enthusiastic about its impact: “New technologies release you from tasks that you’re not really gaining anything from personally. You can free up more time to actually be more human, be more creative, think about how to apply the technology.” She argues that AI is doing the opposite of reducing jobs; it’s making them more enjoyable, challenging, and creative.

“AI might free up enough time for people to start working on improving things that we’re unhappy with, creating more innovation, or improving internal processes. Often, these tasks get put aside because we’re trying to get through ‘to-dos’ that technology could help us with – and this would then give us time to say: ‘Hey, I have a great idea and I’ll start making it happen’.”
Cathrine Andersen, Global Account Manager in the Service Provider segment at Cisco

Telenor’s Martinkenaite agrees, saying that she’s looking for ways to use this technology to augment her capabilities, for example helping her prepare PowerPoint presentations from previously produced content. “It reduces my time on routine tasks so that I can use it to coach my people as a leader, or to put Telenor’s thought leadership position towards key stakeholders. It’s an example of how, in everyday life, we can use technology to empower us to do more enriching tasks.”

AI for creation and connections

Generative AI is already having an impact in the workplace, improving workforce productivity. One example is the use of GitHub Copilot for programming tasks: its code autocomplete functionality improves speed, reduces the scope for errors, and helps programmers with debugging. Meanwhile, ChatGPT can produce a wide variety of writing, from marketing copy to blog posts.
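To illustrate the autocomplete workflow: a programmer might type nothing more than the signature and comment below, and an assistant such as Copilot would typically propose a body along these lines (the example is hypothetical, not actual Copilot output):

```python
def normalise_phone_number(raw: str) -> str:
    """Strip spaces, dashes and parentheses from a phone number."""
    # The body below is the kind of suggestion an AI assistant proposes
    # from the signature and docstring alone.
    return "".join(ch for ch in raw if ch.isdigit() or ch == "+")

print(normalise_phone_number("+47 (0) 123-456 789"))  # -> +470123456789
```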

Cisco is using AI to find ways of connecting employees from different teams, units and countries who don’t already know each other. AI can be used to analyse social interactions within the organisation’s chat rooms to identify what kind of topics an employee has expertise in. The next step would then be to use this analysis to find an expert to solve a problem elsewhere within the organisation – in other words, connecting someone with a challenge to someone who can help. This helps facilitate conversations and collaboration, strengthens networks, and creates cross-functional communities of people around a certain topic.
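As a rough illustration of how such expertise matching could work, the sketch below scores employees by word overlap between their chat history and a colleague’s question. Everything in it – names, messages, the scoring method – is invented for illustration; a real system would use language-model embeddings and careful privacy safeguards.

```python
# Toy expertise matcher: score employees by word overlap between their
# chat history and a colleague's question. All data here is invented.

import re
from collections import Counter

STOPWORDS = {"the", "a", "to", "is", "in", "for", "how", "do",
             "i", "we", "on", "with", "who", "can"}

def tokens(text):
    """Lowercase a message and keep only the meaningful words."""
    return [w for w in re.findall(r"[a-z0-9]+", text.lower())
            if w not in STOPWORDS]

def build_profiles(messages_by_employee):
    """Aggregate each employee's chat messages into a word-frequency profile."""
    return {name: Counter(tokens(" ".join(msgs)))
            for name, msgs in messages_by_employee.items()}

def best_expert(question, profiles):
    """Return the employee whose profile best overlaps with the question."""
    query_words = set(tokens(question))
    scores = {name: sum(profile[w] for w in query_words)
              for name, profile in profiles.items()}
    return max(scores, key=scores.get), scores

chat_logs = {
    "Ana":   ["Kubernetes upgrade went fine", "debugging the ingress controller"],
    "Bjorn": ["new 5G antenna tilt settings", "radio coverage survey results"],
}

expert, scores = best_expert("Who can help with a Kubernetes ingress problem?",
                             build_profiles(chat_logs))
print(expert, scores)  # -> Ana {'Ana': 2, 'Bjorn': 0}
```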

But could concerns slow that creativity down?

Despite the benefits that AI offers, concerns have also been raised – not only the fears that Martinkenaite describes above regarding reskilling and job loss, but also worries that the technology could be misused by dishonest actors, or that the conclusions AI draws are not always correct. This could have negative consequences such as threats to privacy, social manipulation, and discrimination. “The buzzword here is ‘responsibility’,” says Lamberty. “We act in a very conscious and responsible way with regards to how we handle our own data, how we treat our people, our customers and the communities that we are located in.”

But what exactly is responsible AI? As Martinkenaite explains, it’s a set of principles and accountabilities that companies set for themselves, including explainability of how AI works, transparency towards customers, partners and stakeholders, having a human in the loop for high-risk decisions, and securing customers’ data against actors with malicious intent.

For tech companies like Telenor and Cisco, it’s a delicate balancing act: channelling employees’ natural enthusiasm for new technologies while also highlighting the risks. “We’re not trying to slow people down from leveraging this technology, but we are making them aware that the risks are there,” Lamberty explains. And Martinkenaite agrees that it’s important not to slow down AI adoption, and thereby impede innovation, while also ensuring regulation is developed to mitigate risk.

Many companies, including Telenor and Cisco, are already embracing AI and self-regulating via their own guidelines and frameworks. Meanwhile, numerous governments are working on their own regulations, one example being the AI Act currently being developed by the EU. Martinkenaite highlights Telenor’s view that organisations can already voluntarily commit to and establish guidelines. In Europe, for example, these guidelines should be inspired by the current draft of the AI Act and the ethical guidelines from the High-Level Expert Group on Artificial Intelligence (AI HLEG) under the auspices of the European Commission.

What’s unacceptable, high, and minimal risk?

The AI Act, most provisions of which are expected to come into effect in mid-2026, will be the world’s first comprehensive AI law. It classifies different AI systems according to the risk they pose to users, with different risk levels meaning more or less regulation:

Unacceptable risk: these are AI systems considered a threat to people; they will be banned. An example: social scoring, where people are classified based on their behaviour, socio-economic status, or personal characteristics.

High risk: these are AI systems that could negatively affect safety or fundamental rights; they will be assessed before being put on the market and also throughout their lifecycle. An example: the recruitment of new employees where data-driven models are used to find the best candidates. Here, companies developing such systems must ensure, for instance, that the training data used will not produce biased outcomes. AI won’t be allowed to automatically select candidates without a human in the loop, so as to reduce the risk of unfair discrimination.

AI interacting with individuals: such AI systems should comply with minimal transparency requirements that would allow users to make informed decisions. After interacting with an application, the user can decide whether they want to continue using it. An example: the use of chatbots for customer service purposes. Here, customers should be made aware when they are interacting with AI and not with a human.

The AI Act will also make provisions for generative AI (like ChatGPT), which would also have to comply with transparency requirements.

Source: European Parliament: EU AI Act: first regulation on artificial intelligence

Dealing with the unknowns: let’s be optimistic

While encouraging companies to implement their own guidelines, Telenor’s Martinkenaite also highlights the need for greater awareness and education. For wider AI adoption, it’s important that individuals understand how this technology works, what its benefits are and where it can be applied – that understanding makes them far more likely to embrace it. “Now more than ever, there’s a need for more programmes not only for specialists, like data or machine learning engineers, but more importantly for what I call ‘data citizens’,” she says.

“What companies really need to do to mitigate the fears of messing up and of missing out is to explain how we are addressing our customers’ concerns and improving their experience with this tech – how we’re going to personalise offers and meet the customer across all channels. And I think companies need to really invest in upskilling and reskilling programmes at scale.”
Ieva Martinkenaite

Cisco’s Andersen and Lamberty also promote the value of learning – in particular from the latest generations to enter the workplace. As Lamberty says: “The next generation is generally more flexible and tends to be more willing to accept unpredictability. When something happens that wasn’t predicted, they try to figure out how they can make the best of it.” He also explains that, from an organisational perspective, a key way to deal with unpredictability is agility: adjusting quickly, short iteration cycles, faster decision-making. Here technology, while itself a cause of unpredictability, can also help make those adjustments and, in turn, manage the very unpredictability it creates.

“I think we’re getting used to things moving so fast. Technology is moving really rapidly, and it’s almost making us humans a bit impatient, which sparks creative ideas in terms of what more can we achieve with this technology.”
Cathrine Andersen

Just try it!

Andersen agrees with Lamberty that the speed of change is driving behavioural change in the workplace. She underlines that it’s important to harness the eagerness of young talents and recommends that organisations consider whether they could reshape more traditional processes – in particular, by offering more mobility and presenting attractive job opportunities earlier in an employee’s career path.

By the same token, she recognises the eagerness to use technology as a positive – so it’s better to be open and encouraging rather than try to prevent it, because “people are people, and they are naturally more curious than anxious. I noticed from my fellow peers that the fear of technology soon disappears when you start to see applicability.”

Lamberty adds that one of the reasons behind the success of ChatGPT is its ease of use, which requires no specific technical training or background. He also explains that his experience with the technology underlines how transformational it is and why it should be embraced.

“I’ve been sitting in meetings where people started to share their early experiences with generative AI and the momentum was almost like 40 years ago when the Internet arrived. People were tremendously excited; there’s so much opportunity and it’s really created a lot of energy in the organisation.”
Tom Lamberty

And Martinkenaite agrees that the feeling at Telenor is equally positive, adding that the company is fundamentally optimistic about the possibilities that AI offers, believing that the technology will provide opportunities for both the private and public sectors. As she sums up:

“It’s a powerful technology that enables people to make better decisions, do exciting things and make a real impact.”

For more in-depth insights into the key facets of AI, check out the newly published report on the rise of artificial intelligence: New threats, new regulations and new solutions.

For a more in-depth look at how Telenor is helping find answers to cybersecurity challenges, check out the security paper here.