Science & Technology

PERUGIA: The rise of artificial intelligence has forced an increasing number of journalists to grapple with the ethical and editorial challenges posed by the rapidly expanding technology.

AI’s role in assisting newsrooms or transforming them completely was among the questions raised at the International Journalism Festival in the Italian city of Perugia that closes on Sunday.

What will happen to jobs?

AI tools imitating human intelligence are widely used in newsrooms around the world to transcribe sound files, summarise texts and translate.

In early 2023, Germany’s Axel Springer group announced it was cutting jobs at the Bild and Die Welt newspapers, saying AI could now “replace” some of its journalists.

Generative AI — capable of producing text and images following a simple request in everyday language — has been opening new frontiers as well as raising concerns for a year and a half.

One issue is that voices and faces can now be cloned to produce a podcast or present news on television. Last year, Filipino website Rappler created a brand aimed at young audiences by converting its long articles into comics, graphics and even videos.

Media professionals agree that their trade must now focus on tasks offering the greatest “added value”.

“You’re the one who is doing the real stuff” and “the tools that we produce will be an assistant to you,” Google News general manager Shailesh Prakash told the festival in Perugia.

The costs of generative AI have plummeted since ChatGPT burst onto the scene in late 2022, with the tool designed by US start-up OpenAI now accessible to smaller newsrooms.

Colombian investigative outlet Cuestion Publica has enlisted engineers to develop a tool that can delve into its archives and surface relevant background information when news breaks.

But many media organisations are not making their own language models, which are at the core of AI interfaces, said University of Amsterdam professor Natali Helberger.

The disinformation threat

According to one estimate last year by Everypixel Journal, AI has created as many images in one year as photography in 150 years.

That has raised serious questions about how news can be fished out of the tidal wave of content, including deepfakes.

Media and tech organisations are teaming up to tackle the threat, notably through the Coalition for Content Provenance and Authenticity, which seeks to set common standards.

From Wild West to regulation

Media rights watchdog Reporters Without Borders, which has expanded its brief to include defending trustworthy news, launched the Paris Charter on AI and journalism late last year.

“One of the things I really liked about the Paris Charter was the emphasis on transparency,” said Anya Schiffrin, a lecturer on global media, innovation and human rights at Columbia University in the United States.

AI editorial guidelines are updated every three months at India’s Quintillion Media, said its boss Ritu Kapur: none of the organisation’s articles may be written by AI, and AI-generated images must not depict real life.

Resist or collaborate?

AI models feed off data, but their thirst for the vital commodity has raised hackles among providers. In December, the New York Times sued OpenAI and its main investor Microsoft for violation of copyright.

In contrast, other media organisations have struck deals with OpenAI: Axel Springer, US news agency AP, French daily Le Monde and Spanish group Prisa Media, whose titles include El Pais and AS newspapers.

With resources tight in the media industry, collaborating with the new technology is tempting, explained Emily Bell, a professor at Columbia University’s journalism school. She senses a growing external pressure to “Get on board, don’t miss the train”.

In a recent groundbreaking discovery, paleontologists in India's Gujarat found fossils of what is believed to be the world's biggest snake.

The serpent, named Vasuki indicus, was a massive predator that could rival the longest snake to ever exist on Earth, according to the Times of India.

The fossils, which were discovered by researchers at the Indian Institute of Technology Roorkee (IITR), suggest the snake measured between 10 and 15 metres in length and are estimated to be about 47 million years old.

Professor Sunil Bajpai and Debajit Datta, a postdoctoral fellow at IITR, made the discovery and co-authored a study published in the journal Scientific Reports.

Datta noted that a serpent described similarly in ancient Hindu scriptures has been revered under the name Vasuki for countless ages.

They suggest that Vasuki indicus, which lived during a period when Earth's geography was vastly different from today, could have been comparable in size to the famous Titanoboa.

This reptile is believed to have had a broad and cylindrical body, suggesting a strong and robust physique.

Datta explained: "Vasuki was a majestic animal. It may well have been a gentle giant, resting its head on a high porch formed by coiling its massive body for most parts of the day or moving sluggishly through the swamp like an endless train".

The snake's habitat in a marshy swamp near the coast was in a warmer global climate compared to the present day, likely playing a role in facilitating its immense size.

"This discovery is significant not only for understanding the ancient ecosystems of India but also for unravelling the evolutionary history of snakes on the Indian subcontinent.

"It underscores the importance of preserving our natural history and highlights the role of research in unveiling the mysteries of our past," Dr Bajpai said in a statement.

For the first time since the services of X, formerly Twitter, were suspended in Pakistan, the social media site has issued a statement on the matter, stating that it continues to work with the relevant authorities regarding their concerns.

"We continue to work with the Pakistani Government to understand their concerns," read the statement, posted by X's Global Government Affairs team on their official handle.

The social media website, which was bought by American tech billionaire Elon Musk in 2022, has remained suspended in the country since February 17.

The suspension of the website occurred over a week after the general elections in Pakistan on February 8 that triggered debate on the fairness and transparency of the polls. Users reported problems accessing the site, but no official comment was issued by the government of Pakistan on the matter.

However, the interior ministry, in a report submitted to the Islamabad High Court in a separate case related to the suspension, maintained that it had shut down the website over national security concerns, which prompted the platform's statement.

The ministry, in its report to IHC, stated that the "content uploaded on the internet" is a "threat" to the country's national security.

The decision to impose a ban on Twitter/X in Pakistan was made in the interest of upholding national security, maintaining public order and preserving the integrity of the nation, it added.

"It is very pertinent to mention here that the failure of Twitter/X to adhere to the lawful directives of the government of Pakistan and address concerns regarding the misuse of its platform necessitated the imposition of a ban," the ministry said in the report. 

It added that X is not registered in Pakistan and is not a party to the agreement to abide by Pakistani laws.

The non-cooperation of X authorities justified regulatory measures against X, including temporary closure, as the government had no other option, said the Interior Ministry.

The Federal Investigation Agency's (FIA) cybercrime wing requested X to ban accounts propagating against the chief justice, but X officials ignored the request and did not even respond, the report stated.

Social media platforms, it added, are being used indiscriminately for spreading extremist ideas and false information, and X was being used as a tool by some evil elements to damage law and order and promote instability.

The ministry said that the ban on X is aimed at ensuring the responsible use of social media platforms in accordance with the law.

The ministry, it added, is the protector of Pakistan's citizens and responsible for national stability. Earlier, the social media platform TikTok was also banned by the government, but the ban was lifted after TikTok signed an agreement to abide by Pakistani law.

The report also noted that various countries around the world have banned social media platforms for security reasons.

At the request of intelligence agencies, the Interior Ministry issued orders for the closure of X on February 17, 2024.

It argued that the application filed in court against the closure of X was contrary to the law and facts, was not admissible, and should be dismissed.

However, the Sindh High Court (SHC) a day earlier directed the Ministry of Interior to revoke its letter regarding the suspension of the social media platform within one week.

ChatGPT creator OpenAI opened a new office in Tokyo on Monday, the first Asian outpost for the groundbreaking tech company as it aims to ramp up its global expansion.

Thanks to the stratospheric success of its generative tools that can create text, images and even video, OpenAI has become a leader in the artificial intelligence revolution and one of the most significant tech companies in the world.

The Japan office is the latest part of the Microsoft-backed firm’s international push, having already set up bases in London and Dublin.

“We’re excited to be in Japan which has a rich history of people and technology coming together to do more,” OpenAI CEO Sam Altman said in a statement.

“We believe AI will accelerate work by empowering people to be more creative and productive, while also delivering broad value to current and new industries that have yet to be imagined.”

OpenAI said its Japan office would bring it closer to enterprise clients — including global auto leader Toyota, tech conglomerate Rakuten and industrial giant Daikin — that are using its products “to automate complex business processes”.

“We chose Tokyo as our first Asian office for its global leadership in technology, culture of service, and a community that embraces innovation,” the company added.

OpenAI also announced a new Japanese-language version of ChatGPT on Monday, and hailed the country as a “key global voice on AI policy”, offering potential solutions to issues such as labour shortages.

The company said its Japan office would also help “accelerate the efforts of local governments, such as Yokosuka City” in their drive to improve the efficiency of public services.

The Tokyo ‘buzz’

The San Francisco-based firm has reportedly been in discussions with hundreds of companies as it looks to expand its revenue sources.

OpenAI’s chief operating officer Brad Lightcap told Bloomberg in an interview published this month that the firm has seen huge demand for its corporate version of ChatGPT.

“We have a very global base of demand,” he said in the interview. “So we want to show up where our customers are. We feel a lot of pull from places like Japan and Asia broadly.”

OpenAI, reportedly valued at $80 billion or more earlier this year, is the latest major tech firm to invest in Japan.

Microsoft, one of OpenAI’s biggest investors, last week announced a separate $2.9bn investment to provide Japan with the powerful graphics processing units crucial for running AI apps, and to train three million Japanese workers in AI skills.

Amazon Web Services is spending $14bn to expand its cloud infrastructure in Japan, while Google has launched a regional cybersecurity hub in the country.

Experts say geopolitical tensions have made Japan an increasingly attractive partner for tech firms compared to China, in addition to advantages such as supportive policies and a highly educated talent pool.

“What happens in Tokyo can create a buzz,” Hideaki Yokota, vice president of the MM Research Institute, told AFP. “A base in Tokyo should help (OpenAI) attract much young talent.”

Widespread adoption of artificial intelligence (AI) and machine learning technologies in recent years has provided “threat actors with sophisticated new tools to perpetrate attacks”, cybersecurity company Kaspersky Research said in a press release on Saturday.

The security firm explained that one such tool is the deepfake, which includes AI-generated human-like speech as well as photo and video replicas of people. Kaspersky warned that companies and consumers must be aware that deepfakes will likely become a greater concern in the future.

A deepfake — a portmanteau of “deep learning” and “fake” — synthesises “fake images, video and sound using artificial intelligence”, Kaspersky explains on its website.

The security firm warned that it had found deepfake creation tools and services available on “darknet marketplaces” to be used for fraud, identity theft and stealing confidential data.

“According to the estimates by Kaspersky experts, one minute of deepfake video can be purchased for as little as $300,” the press release reads.

According to the press release, a recent Kaspersky survey found that 51 per cent of employees surveyed in the Middle East, Turkiye and Africa region said they could tell a deepfake from a real image. However, in a test, only 25pc could distinguish a real image from an AI-generated one.

“This puts organisations at risk given how employees are often the primary targets of phishing and other social engineering attacks,” the firm warned.

“Despite the technology for creating high-quality deepfakes not being widely available yet, one of the most likely use cases that will come from this is to generate voices in real-time to impersonate someone,” the press release quoted Hafeez Rehman, technical group manager at Kaspersky, as saying.

Rehman added that deepfakes were not only a threat to businesses, but to individual users as well. “They spread misinformation, are used for scams, or to impersonate someone without consent,” he said, stressing that they were a growing cyber threat to be protected from.

The Global Risks Report 2024, released by the World Economic Forum in January, had warned that AI-fuelled misinformation was a common risk for India and Pakistan.

Deepfakes have been used in Pakistan to further political aims, particularly in anticipation of general elections.

Former prime minister Imran Khan — who is currently incarcerated at Adiala Jail — had used an AI-generated image and voice clone to address an online election rally in December, which drew more than 1.4 million views on YouTube and was attended live by tens of thousands.

While Pakistan has drafted an AI law, digital rights activists have criticised the lack of guardrails against disinformation and of protections for vulnerable communities.

The United States has topped long-time leader China as Taiwan’s main export market for four consecutive months due to a surge in demand for microchip products and AI technology, Taipei’s finance ministry said on Friday.

Self-ruled Taiwan is a microchip-manufacturing powerhouse, churning out the world’s most advanced silicon wafers, which are needed for everything from e-vehicles and satellites to fighter jets and, increasingly, AI technology.

For two decades, its top export market has been China — which claims Taiwan as part of its territory — but December data from Taiwan’s finance ministry shows the United States topping the list for the first time since August 2003.

In December, Taiwan exported $8.49 billion in products to the United States, compared with $8.28bn to mainland China.

The trend continued through March, when exports to the United States rose 65 per cent to $9.11bn, while mainland China received $7.99bn.

Those figures exclude Hong Kong, which holds its own status as a customs territory. When combined with mainland tallies, China remains the top destination for Taiwanese goods.
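The figures above can be sanity-checked with a little arithmetic. This is a rough sketch, assuming the reported 65 per cent rise in March US-bound exports is year-on-year (the article does not state the comparison period):

```python
# Back-of-the-envelope check on the reported Taiwan export figures (in $bn).
# Assumption: the 65 per cent rise in March US-bound exports is year-on-year.
march_us = 9.11
march_china = 7.99
growth = 0.65

implied_prior_us = march_us / (1 + growth)  # implied US-bound exports a year earlier
gap = march_us - march_china                # US lead over mainland China in March

print(f"Implied year-earlier US exports: ${implied_prior_us:.2f}bn")
print(f"US lead over mainland China: ${gap:.2f}bn")
```

Under that assumption, US-bound exports would have been roughly $5.5bn a year earlier, and the March gap over mainland China (excluding Hong Kong) comes to about $1.1bn.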

An official in the trade division of Taiwan’s finance ministry attributed the recent US tilt to the global “reorganisation of electronics and ICT (information and communication technology) supply chains, and the popularity of the AI industry”.

Since Taiwanese President Tsai Ing-wen came to power in 2016, she has been working to strengthen economic ties with the United States, seeing Washington as a crucial partner as neighbouring China grows increasingly aggressive.

WASHINGTON: When a rare total solar eclipse sweeps across North America on Monday, scientists will be able to gather invaluable data on everything from the Sun’s atmosphere to strange animal behaviors — and even possible effects on humans.

It comes with the Sun near the peak of its 11-year solar cycle, setting the stage for a breathtaking display: The corona will glow spectacularly from the Moon’s silhouette along the path of totality, a corridor stretching from Mexico to Canada via the United States.

Total solar eclipses offer “incredible scientific opportunities,” Nasa Deputy Administrator Pam Melroy told a press conference this week about the celestial event.

The US space agency is one of the institutions at the ready for the eclipse, with plans to launch so-called “sounding rockets” to study the effects on Earth’s upper atmosphere.

When the Moon passes directly in front of the Sun and blocks it, the elusive outermost edge of the Sun’s atmosphere, or corona, will be visible “in a very special way,” Melroy said. “Things are happening with the corona that we don’t fully understand,” she said.

The heat within the corona intensifies with distance from the Sun’s surface — a counterintuitive phenomenon that scientists struggle to fully comprehend or explain.

Solar flares — sudden explosions of energy that release radiation into space — take place in the corona, as do solar prominences, enormous plasma formations that loop out from the Sun’s surface.
