Cisco’s Lamberty agrees that we are living in unpredictable times – he mentions the pandemic and the Russia-Ukraine war as particular recent causes for concern – and that for many people, “it’s going to be new, and it’s going to be tough”.
For example, tech entrepreneur Elon Musk described AI as the most disruptive force in history while speaking to UK Prime Minister Rishi Sunak in November 2023, before also claiming that at some point no job will be needed because AI will do everything. And a report by investment bank Goldman Sachs from spring 2023 stated that AI could replace a quarter of work tasks in the US and Europe but may also mean new jobs and a productivity boom.
According to the GSMA (the association representing the interests of mobile network operators worldwide), AI is a powerful, emerging force that is transforming business and society. PwC estimates that it could contribute $15.7 trillion to the global economy by 2030. Yet AI is not a futuristic technology: it is already used across a wide variety of industries, and many people will have encountered it in their daily lives, for example when Netflix recommends viewing choices or when they speak to voice assistants such as Siri or Alexa.
Martinkenaite highlights that the real benefits, and the potential risks, of AI come down to how people develop and use it. She points out that AI as such is no more and no less than powerful software, and that human action is needed to embrace the power of this technology and unlock significant value. Similarly, Lamberty says that AI is “a piece of technology you can use in the right way, or in the wrong way. It has flaws, and we have to recognise and talk about these.”
The Green Radio project aims to reduce energy consumption in mobile networks by using AI algorithms to optimise traffic and automatically adjust data usage on each base station. A pilot project in Denmark demonstrated a potential 2.5 percent reduction in power usage, saving the equivalent of around 700 tonnes of CO2 emissions.
Telenor’s Finnish operator, DNA, expected energy consumption to grow by six to seven percent in 2021 when it deployed 5G equipment and increased capacity four- to five-fold. What it witnessed instead was a reduction in energy consumption in some areas of the network. By using data sets and AI to analyse behaviour and predict mobile data traffic requirements, the operator has been able to optimise its energy use.
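To make the approach behind the Green Radio and DNA examples more concrete, the sketch below shows the general pattern of predicting near-term traffic for a base station and powering down spare capacity during low-load periods. It is a toy illustration only: the moving-average forecast, threshold and function names are assumptions for demonstration, not Telenor’s actual algorithms.

```python
# Toy sketch of "predict traffic, then scale capacity" for one base station.
# The forecast method and threshold are illustrative assumptions.

from statistics import mean

def forecast_next_hour(hourly_traffic_gb: list[float], window: int = 3) -> float:
    """Naive moving-average forecast of next-hour traffic (GB)."""
    return mean(hourly_traffic_gb[-window:])

def plan_carrier_state(predicted_gb: float, low_load_threshold_gb: float = 5.0) -> str:
    """Decide whether spare capacity carriers can sleep during the next hour."""
    return "sleep_spare_carriers" if predicted_gb < low_load_threshold_gb else "full_capacity"

# Example: the last six hours of traffic on one base station (GB).
traffic = [22.0, 18.5, 9.0, 4.2, 3.1, 2.8]
prediction = forecast_next_hour(traffic)
print(f"Predicted next-hour traffic: {prediction:.1f} GB -> {plan_carrier_state(prediction)}")
```

In a real network, the forecast would come from models trained on historical traffic data, and the power-saving actions would be applied and monitored across thousands of base stations rather than hard-coded per site.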
Telenor contributes to big data analysis for responsible AI both in Europe and Asia. This work has, among other things, helped predict and control the spread of dengue fever in Pakistan and malaria in Bangladesh.
Before digging deeper into these flaws, it’s important to acknowledge that there’s tremendous excitement around AI in the workplace because of the benefits it provides: it can enhance productivity, streamline processes, and improve decision-making. For example, it automates repetitive tasks such as data entry or invoice processing, analyses large datasets quickly to provide insight, provides virtual assistants, chatbots and personalised learning, and helps prevent security threats.
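As a small illustration of the repetitive-task automation mentioned above, the sketch below pulls an invoice number and total out of free-text invoice data. The field formats are invented for the example; real documents would need a more robust parser or an AI-based extraction model.

```python
# Minimal sketch of automating a data-entry task: extracting fields from
# invoice text. Field formats are invented for illustration.

import re

INVOICE_NO = re.compile(r"Invoice\s+no\.?\s*([\w-]+)", re.IGNORECASE)
TOTAL = re.compile(r"Total\s*:\s*€?\s*([\d.,]+)", re.IGNORECASE)

def extract_fields(text: str) -> dict:
    number = INVOICE_NO.search(text)
    total = TOTAL.search(text)
    return {
        "invoice_no": number.group(1) if number else None,
        "total": float(total.group(1).replace(",", "")) if total else None,
    }

sample = "Invoice no. INV-2024-17\nSupplier: Example AS\nTotal: €1,250.00"
print(extract_fields(sample))  # {'invoice_no': 'INV-2024-17', 'total': 1250.0}
```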
Lamberty’s colleague Cathrine Andersen, Global Account Manager in the Service Provider segment at Cisco, is enthusiastic about its impact: “New technologies release you from tasks that you’re not really gaining anything from personally. You can free up more time to actually be more human, be more creative, think about how to apply the technology.” She argues that AI is doing the opposite of reducing jobs; it’s making them more enjoyable, challenging, and creative.
Telenor’s Martinkenaite agrees, saying that she’s looking for ways to use this technology to augment her capabilities, for example helping her prepare PowerPoint presentations from previously produced content. “It reduces my time on routine tasks so that I can use it to coach my people as a leader, or to put Telenor’s thought leadership position towards key stakeholders. It’s an example of how, in everyday life, we can use technology to empower us to do more enriching tasks.”
Generative AI, for instance, has already had an impact on the workplace by improving workforce productivity. One example is the use of GitHub Copilot for programming tasks: its code autocomplete functionality improves speed, reduces the scope for errors, and helps programmers with debugging. Meanwhile, ChatGPT can produce a wide variety of writing, from marketing copy to blog posts.
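As an illustration of the kind of writing task mentioned above, the sketch below asks a generative model to draft a short piece of marketing copy via the OpenAI Python client. The model name and prompt are illustrative choices, and other providers’ APIs follow a similar pattern.

```python
# Minimal sketch of drafting marketing copy with a generative model.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# the model name is an illustrative choice.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise marketing copywriter."},
        {"role": "user", "content": "Write two sentences promoting an AI-powered "
                                    "energy-optimisation service for mobile networks."},
    ],
)

print(response.choices[0].message.content)
```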
Cisco is using AI to find ways of connecting employees from different teams, units and countries who don’t already know each other. AI can be used to analyse social interactions within the organisation’s chat rooms to identify what kind of topics an employee has expertise in. The next step would then be to use this analysis to find an expert to solve a problem elsewhere within the organisation – in other words, connecting someone with a challenge to someone who can help. This helps facilitate conversations and collaboration, strengthens networks, and creates cross-functional communities of people around a certain topic.
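The sketch below illustrates the general idea behind such expertise matching, rather than Cisco’s actual system: each employee’s chat messages are turned into a term profile, and a new question is routed to the employee with the most similar profile. Names and data are invented, and a production system would of course need consent, anonymisation and far richer signals.

```python
# Toy sketch of expertise matching from chat messages using TF-IDF profiles.
# Not Cisco's implementation; all names and messages are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

messages_by_employee = {
    "alice": "kubernetes cluster upgrade rollout helm charts ingress",
    "bob": "sales forecast pipeline revenue quarterly targets",
    "carol": "5g radio energy optimisation base station traffic prediction",
}

employees = list(messages_by_employee)
vectorizer = TfidfVectorizer()
profiles = vectorizer.fit_transform(list(messages_by_employee.values()))

def find_expert(question: str) -> str:
    """Return the employee whose message history is most similar to the question."""
    question_vec = vectorizer.transform([question])
    scores = cosine_similarity(question_vec, profiles)[0]
    return employees[scores.argmax()]

print(find_expert("Who can help reduce base station energy usage?"))  # carol
```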
Despite the benefits that AI offers, concerns have also been raised – not only related to the fears that Martinkenaite raises above regarding reskilling and job loss, but also that the technology could be misused by dishonest actors or that the conclusions AI draws are not always correct. This could have negative consequences such as threats to privacy, social manipulation, and discrimination. “The buzzword here is ‘responsibility’,” says Lamberty. “We act in a very conscious and responsible way with regards to how we handle our own data, how we treat our people, our customers and the communities that we are located in.”
But what exactly is responsible AI? As Martinkenaite explains, it’s a set of principles and accountabilities that companies set for themselves which include explainability of how AI works, transparency towards customers, partners and stakeholders, having a human in the loop for high-risk decisions, and securing customers’ data from actors with malicious intent.
For tech companies like Telenor and Cisco, it’s a delicate balancing act between channelling employees’ natural enthusiasm for new technologies while also highlighting the risks. “We’re not trying to slow people down from leveraging this technology, but we are making them aware that the risks are there,” Lamberty explains. And Martinkenaite agrees that it’s important not to slow down AI adoption, and therefore impede innovation, while also ensuring regulation is being developed to mitigate risk.
Many companies, including Telenor and Cisco, are already embracing AI and self-regulating via their own guidelines and frameworks. Meanwhile, numerous governments are working on their own regulations, one example being the AI Act currently being developed by the EU. Martinkenaite highlights Telenor’s view that organisations can already voluntarily commit to and establish guidelines. In Europe, for example, these guidelines should be inspired by the current draft of the AI Act and the ethical guidelines from the High-Level Expert Group on Artificial Intelligence (AI HLEG) under the auspices of the European Commission.
The AI Act, most provisions of which are expected to come into effect in mid-2026, will be the world’s first comprehensive AI law. It classifies different AI systems according to the risk they pose to users, with different risk levels meaning more or less regulation:
Unacceptable risk: these are AI systems considered a threat to people and will be banned. An example: Social scoring, where people are classified based on their behaviour, socio-economic status, or personal characteristics.
High risk: these are AI systems that could negatively affect safety or fundamental rights; they will be assessed before being put on the market and also throughout their lifecycle. An example: the recruitment of new employees where data-driven models are used to find the best candidates. Here, companies developing such systems must ensure, for instance, that the training data used will not produce biased outcomes. AI won’t be allowed to automatically select candidates without a human in the loop, so as to reduce the risk of unfair discrimination (a simple illustration of such safeguards follows this list).
AI interacting with individuals: such AI systems should comply with minimal transparency requirements that would allow users to make informed decisions. After interacting with an application, the user can decide whether they want to continue using it. An example: the use of chatbots for customer service purposes. Here, customers should be made aware when they are interacting with AI and not with a human.
The AI Act will also make provisions for generative AI (like ChatGPT), which would also have to comply with transparency requirements.
Source: European Parliament: EU AI Act: first regulation on artificial intelligence
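Building on the high-risk recruitment example above, the sketch below illustrates two safeguards a company might put around such a system: a rough bias check that compares selection rates across groups (the informal “four-fifths” heuristic) and a hard rule that a human reviewer signs off before any decision is made. This is an invented example, not legal guidance on what the AI Act requires; all names, data and thresholds are assumptions.

```python
# Illustrative sketch (not legal guidance) of safeguards around a high-risk
# recruitment model: a selection-rate bias check plus a human in the loop.
# All names, data and thresholds are invented.

from collections import defaultdict

def selection_rates(candidates: list[dict]) -> dict[str, float]:
    """Share of model-shortlisted candidates per (self-reported) group."""
    totals, shortlisted = defaultdict(int), defaultdict(int)
    for c in candidates:
        totals[c["group"]] += 1
        shortlisted[c["group"]] += c["model_shortlisted"]
    return {group: shortlisted[group] / totals[group] for group in totals}

def flag_disparity(rates: dict[str, float], min_ratio: float = 0.8) -> bool:
    """Flag if any group's selection rate falls below 80% of the highest rate."""
    highest = max(rates.values())
    return any(rate < min_ratio * highest for rate in rates.values())

candidates = [
    {"name": "A", "group": "x", "model_shortlisted": True},
    {"name": "B", "group": "x", "model_shortlisted": True},
    {"name": "C", "group": "y", "model_shortlisted": False},
    {"name": "D", "group": "y", "model_shortlisted": True},
]

rates = selection_rates(candidates)
if flag_disparity(rates):
    print("Disparity flagged - route shortlist and rates to a human reviewer:", rates)
else:
    print("No disparity flagged - a human reviewer still signs off:", rates)
```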
While encouraging companies to implement their own guidelines, Telenor’s Martinkenaite also highlights the need for greater awareness and education. For wider AI adoption, it’s important that individuals understand how this technology works, its benefits and its potential applications, ensuring that they are more likely to embrace it. “Now more than ever, there’s a need for more programmes not only for specialists, like data or machine learning engineers, but more importantly for what I call ‘data citizens’,” she says.
Cisco’s Andersen and Lamberty also promote the value of learning – and in particular from the latest generations to enter the workplace. As Lamberty says: “The next generation is generally more flexible and tends to be more willing to accept unpredictability. When something happens that wasn’t predicted, they try to figure out how they can make the best of it.” He also explains that, from an organisational perspective, a key way to deal with unpredictability is agility: adjusting quickly, short iteration cycles, faster decision-making. Here, technology – while itself a cause of unpredictability – can also help make those adjustments and, in turn, manage that unpredictability.
Andersen agrees with Lamberty that we are seeing behavioural change in the workplace due to the speed of change. She underlines that it’s important to harness the eagerness that young talents have and recommends that organisations consider if they could reshape more traditional processes, and in particular offer more mobility and present attractive job opportunities at an earlier stage in an employee’s career path.
By the same token, she recognises the eagerness to use technology as a positive – so it’s better to be open and encouraging rather than trying to prevent it, because “people are people, and they are naturally more curious than anxious. I noticed from my fellow peers that the fear of technology soon disappears when you start to see applicability.”
Lamberty adds that one of the reasons behind the success of ChatGPT is its ease of use, which requires no specific technical training or background. He also explains that his experience with the technology underlines how transformational it is and why it should be embraced.
And Martinkenaite agrees that the feeling at Telenor is equally positive, adding that the company is fundamentally optimistic about the possibilities that AI offers, believing that the technology will provide opportunities for both the private and public sectors. As she sums up:
“It’s a powerful technology that enables people to make better decisions, do exciting things and make a real impact.”
For more in-depth insights into the key facets of AI, check out the newly published report on the rise of artificial intelligence: New threats, new regulations and new solutions.
For a more in-depth look at how Telenor is helping to find answers to cybersecurity challenges, check out the security paper here.