OpenAI shuts down robotics team because it doesn’t have enough data yet


In brief OpenAI has disbanded its AI robotics team and is no longer trying to apply machine learning to physical machines.

Wojciech Zaremba, the OpenAI co-founder who led the robotics group, confirmed that the company recently broke up the team to focus on more promising areas of artificial general intelligence research.

“Here’s a reveal … as of recently we changed the focus at OpenAI, and I actually disbanded the robotics team,” he said during an episode of the Weights & Biases podcast.

Zaremba said a lack of training data was holding the robotics research back: there wasn’t enough information on hand to teach the systems to the level of intelligence desired.

“From the perspective of what we want to achieve, which is to build AGI, I think there was actually some components missing,” he added. A spokesperson from OpenAI this week confirmed it had, indeed, stopped working on robotics.

You can now download a top AI model for protein prediction

DeepMind this week released AlphaFold, the most advanced protein-structure-prediction machine-learning model yet, on GitHub.

If you want to play around with it, you’ll need to be familiar with Docker, and have the space to store hundreds of gigabytes of genetic sequencing data as well as the model.

AlphaFold is trained to predict how a protein folds and takes shape given its constituent amino acids. Last year, DeepMind entered its system into the Critical Assessment of Protein Structure Prediction contest, and thrashed its rivals.

DeepMind’s goal is to get the model accurate enough to be useful in developing drugs that can target specific proteins to cure or mitigate diseases. A paper by DeepMind on AlphaFold’s design was published this month in Nature.

In a separate project, a large team of researchers at various universities and academic institutions also published their own open-source AI protein-folding model. Known as RoseTTAFold, it doesn't perform as well as AlphaFold, though it's not too shabby, according to a paper published in Science.

Is it wrong to bring dead people back to life in documentaries without telling the audience?

A New Yorker review of Roadrunner, a documentary about the late and great Anthony Bourdain, has sparked questions over whether it’s ethical or not to use machine-learning technology to low-key fake people’s voices.

In the magazine piece, the documentary’s filmmaker Morgan Neville admitted to using software that mimicked Bourdain’s voice, making the celebrity chef and writer say words he had only written. Specifically, the software was used to read out an email Bourdain had written to a friend. The code was trained on clips of Bourdain speaking on TV, radio, audiobooks, and podcasts.

“If you watch the film … you probably don’t know what the other lines are that were spoken by the AI, and you’re not going to know,” Neville said. “We can have a documentary-ethics panel about it later.”

Should the director tell viewers or listeners when an audio clip has been synthetically generated? Does it matter, seeing as Bourdain did express those sentiments albeit in an email and not into a microphone? Will this blow a hole in trust in future documentaries, journalism, and media output? This Tech Policy Press interview with Sam Gregory – a deep-fakes expert and program director of Witness Media Lab – has more on that.

Discord believes AI can help moderate hate speech online

IRC-for-the-next-generation Discord has snapped up an AI startup known for its automated moderation tools.

Sentropy, based in Palo Alto, California, confirmed the deal in a blog post this week. “Three years after starting this company with Michele, Ethan, and Taylor, I’m thrilled to announce that we’re joining Discord to continue fighting against hate and abuse on the internet,” said CEO and co-founder John Redgrave.

The upstart has built proprietary machine-learning models said to be capable of detecting hate speech and toxic language, with the aim of shutting down online harassment. The amount Discord paid to acquire Sentropy's technology and team was not disclosed.

Discord was primarily known for being popular with gamers, though its use has exploded in other communities, from programming to cryptocurrencies. It reportedly walked away from a $10bn offer from Microsoft earlier this year. ®
