Sam Altman Replies To CRED’s Kunal Shah, Creates Video For Him Using New Tool

Sam Altman replied to Kunal Shah's prompt for an AI-generated video.

OpenAI, the company behind ChatGPT, recently introduced Sora, its first artificial intelligence-powered text-to-video generation model. The company claims it can generate videos up to 60 seconds long. The company's Chief Executive Officer Sam Altman also shared the development, taking to X (formerly Twitter) to post short clips made by Sora.

Mr Altman asked his followers on the platform to "reply with captions for videos you'd like to see and we'll start making some!" He added that they should not "hold back on the detail or difficulty!" Replying to the post, CRED Founder Kunal Shah said he wanted to see a video featuring animals and the ocean. "A bicycle race on ocean with different animals as athletes riding the bicycles with drone camera view," he wrote.

A few hours later, the OpenAI Chief replied to the post with a video. In the clip, one can see whales, penguins and tortoises riding coloured bicycles in the ocean.

Since being shared, the clip has amassed 4.5 million views and 30,000 likes on the platform.

“Interesting and powerful AI,” said a user.

Another said, “This one’s actually the most impressive video so far from a semantics and fidelity standpoint imo”

A third said, “Such a powerful tool and it has already spread the magic all over the world.”

A user added, “noooo the turtle can’t reach the pedals”

“It is unbelievable how fast these AI technologies are advancing… and terrifying because we aren’t ready for the disruptions these will soon create,” stated another person.

Meanwhile, according to OpenAI, Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. Interestingly, the length of video it claims to generate is more than ten times what its rivals offer.
OpenAI stated on its website, “The current model has weaknesses. It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterwards, the cookie may not have a bite mark.” To ensure the AI tool is not used for creating deepfakes or other harmful content, the company is building tools to help detect misleading content.
