Telecom News Roundup: March 2023

4 min read

The weather is getting nicer by the day — so nice, in fact, that it might draw attention away from the latest developments in the telco ecosystem. Fortunately, our monthly roundup is here to help you stay in the know!

Google opens limited access to its AI chatbot 

Google has announced the limited public rollout of its chatbot product Bard, billed as an “early experiment that lets you collaborate with generative AI.” UK and US users can access Bard by joining a waitlist. The chatbot is based on Google’s own LaMDA model and works in much the same way as OpenAI’s ChatGPT and Microsoft’s Bing chatbot. Bard can answer “NORA” queries – those to which there is no one right answer – but users are warned that it may display “inaccurate or offensive information that doesn’t represent Google’s views”. Beneath each answer, users can rate the response, restart the conversation, or click a “Google It” button to switch to the search engine. Bard is a separate product from Google Search, although it will likely raise questions about plagiarism and the relationship between Google and third-party websites.

Google describes Bard as a way to “boost your productivity, accelerate your ideas and fuel your curiosity”, able to answer questions on topics as diverse as reading more books, quantum physics and outlining blog posts. The company frames Bard’s development as a “long process”, with the current limited public rollout being only the first step. It remains to be seen how content creators and regulators will respond to the new product.

In the meantime, GMS has launched its own AI-powered conversational chatbot solution that will help enterprises enhance their communications efficiency across numerous use scenarios.

Robocall losses might reach $70 billion by 2027

Fraudulent robocalls are set to cost mobile subscribers a record $58 billion globally in 2023, up from $53 billion in 2022, according to a new study by Juniper Research. The rise will be driven by an increase in scam calls aimed at end users, such as unauthorised call forwarding or caller ID spoofing, and the report predicts that fraudsters’ ability to innovate will push these losses to $70 billion globally by 2027. North America remains the most affected region, as its relative wealth offers criminals larger monetary opportunities; it will account for over half of the losses to robocalling in 2023.

Despite the ongoing development of robocalling mitigation frameworks, the report urges stakeholders formulating frameworks outside North America to focus on region-specific methods of fraud. It identifies emerging brand authentication technology as key to fraud mitigation, as it instils consumer trust in voice channels: the technology lets users verify a call’s origin on the smartphone screen before answering, and the report expects it to become the most prominent solution for protecting subscribers from robocalling fraud. Juniper also anticipates that fraudulent losses from robocalling will decline for the first time in North America by 2025, owing to the widespread adoption of the STIR/SHAKEN framework.

WhatsApp stands its ground to keep conversations encrypted

WhatsApp says it would rather have its app blocked in the UK than compromise its messaging encryption under the proposed Online Safety Bill. WhatsApp head Will Cathcart has vowed that the company will refuse to comply if asked to weaken the privacy of encrypted messages. Critics of the bill say it grants Ofcom the power to require private encrypted messaging apps to adopt accredited technology to identify and remove child abuse material. According to Cathcart, however, undermining the privacy of some WhatsApp messages would undermine it for all users. WhatsApp is used by over 70% of online UK adults, according to Ofcom. Meredith Whittaker, president of Signal, another popular encrypted messaging app, has similarly said that the company would “absolutely, 100% walk” and stop providing services in the UK if the bill required it to weaken the privacy of its encrypted messaging system.

The government and child protection charities have long argued that encryption hinders efforts to tackle online child abuse. However, critics say the only way to check encrypted messages for child sexual abuse material would be to have services scan them on a device such as a phone before they are encrypted and sent, undermining encryption’s privacy benefits. The government denies that the bill represents a ban on end-to-end encryption and says it is possible to have both privacy and child safety. The Information Commissioner’s Office has said any interventions that could weaken encryption must be “necessary and proportionate”.

Ofcom shares new research on internet fraud

87% of internet users in the United Kingdom have experienced a potential online scam or fraud, Ofcom reports. Examples include impersonation scams, fake dating apps or employment offers, counterfeit goods and scam calls. Worryingly, 46% of respondents admitted they had been personally drawn in by these scams, and 39% said they knew someone who had been.

Opinions vary about whose job it should be to prevent scams in the first place: 61% of respondents think it is up to online tech companies to tackle the problem, a sentiment the UK government broadly shares as it develops new laws to protect users against online fraud and scams. On the other hand, 54% of respondents in Ofcom’s research think it is a user’s responsibility not to fall victim to a scammer, while the same proportion thinks the police should deal with it. Meanwhile, 30% believe it should be Ofcom’s job.
