Apple recently announced a range of new accessibility tools for the iPhone and iPad, aiming to make its devices more inclusive for individuals with disabilities.
One of these tools, Personal Voice, lets users create a synthesized voice that sounds like their own after a short training session of about 15 minutes.
By having the user read a series of text prompts, the technology learns to replicate their voice. A companion feature, Live Speech, reads the user's typed text aloud in a synthesized voice during phone calls, FaceTime conversations, and in-person interactions. Users can also save frequently used phrases for quick access in real-time conversations.
These features have been designed with input from disability communities, addressing the needs of people with cognitive, vision, hearing, and mobility disabilities.
Apple’s commitment to accessibility is evident in its efforts to support new ways of connecting for a diverse range of users.
iPhones will soon be able to replicate your own voice after just 15 minutes of training 😳📱 pic.twitter.com/6vHL1NwuGT
— Daily Loud (@DailyLoud) May 21, 2023
While these tools provide valuable benefits, they have also raised concerns about the misuse of artificial intelligence and deepfakes.
However, Apple reassures users that Personal Voice relies on on-device machine learning, keeping voice data private and secure.