OpenAI, the company behind ChatGPT, the world's best-known artificial intelligence tool, has announced Voice Engine, a tool that can clone a voice from an audio sample of just 15 seconds. The announcement came in a statement the company issued after small-scale testing of the new tool. The statement added: "We recognize that generating speech that resembles people's voices is a step that carries serious risks, especially in this election year."
Concern About the Use of Voice Cloning Technology to Mislead Public Opinion
The company, which specializes in artificial intelligence research, confirmed that use of Voice Engine will be limited to guard against fraud, especially during the current year, in which many countries are holding elections.
Disinformation researchers fear that generative AI applications could be misused in election campaigns to sway public opinion, particularly audio cloning tools, which are cheap, easy to use, and difficult to trace.
This comes as the leading artificial intelligence lab works to reduce the risk of harmful misinformation in a global election year.
OpenAI’s Cautious Approach to Deploying the New Tool
Voice Engine is being withheld from the general public out of extreme caution over the consequences of voice cloning being used for illegal purposes.
Meanwhile, OpenAI stressed in its statement that it is taking an extremely cautious approach before deploying the new tool more widely, given the potential for misuse of synthetic audio.
Voice Engine was first developed in 2022, when the company launched an initial version of the text-to-speech feature included in ChatGPT. The tool's full capabilities have not yet been revealed, partly because of the cautious, deliberate approach OpenAI is taking to any wider launch.
OpenAI said that, based on the results of the small-scale tests under way, it will make a more informed decision about whether and how to deploy the technology at scale.
Real-Life Applications of Voice Engine Can Be Risky
In its post, the company shared examples of real-world uses of the new tool by partners who were given early access to build it into their applications and products. Education technology company Age of Learning, for example, has experimented with generating voice-overs from written content, while HeyGen, an AI visual storytelling app, lets users translate recorded content into other languages while preserving the original speaker's voice.
In its statement, OpenAI also called for broader safeguards, such as phasing out voice-based authentication as a security measure for access to bank accounts and other sensitive information.
The Statement Added:
"We have adopted a range of security measures, including watermarking, so we can trace the origin of any audio generated by the new tool, as well as proactive monitoring of its use."
The company emphasized that partners testing Voice Engine have agreed to rules requiring, for example, the explicit consent of any person whose voice is used, as well as clear disclosure to listeners that the voice was generated by artificial intelligence.
Now that artificial intelligence reaches into every detail of our lives, and even our voices can be cloned without our knowledge, will modern technologies end up shaping the future of coming generations entirely? Share your opinion with us.
Source: The Guardian