The Communiqué News

Pritish Bagdi

Pi Network Launches Open Mainnet, Pi Coin Sees Major Price Volatility

The Pi Network officially transitioned to its Open Mainnet on February 20, 2025, marking a major milestone in its move from a closed ecosystem to full decentralization. The long-anticipated launch sent shockwaves through the cryptocurrency market, triggering sharp price fluctuations in Pi Coin.

Within hours of the launch, Pi Coin surged to a peak of $1.97, fueled by investor excitement and heightened trading activity. The rally was short-lived, however, and the price plunged to $0.737, underscoring the market's volatility. In a surprising rebound, Pi Coin then recovered roughly 75% from the low, stabilizing around $1.29.
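The rebound figure follows directly from the two prices quoted above; here is a quick back-of-the-envelope check (a sketch in Python, not trading code):

```python
# Percent recovery from the reported trough to the reported stabilization price.
low = 0.737      # reported low, in USD
stable = 1.29    # reported stabilization price, in USD

recovery = (stable - low) / low * 100
print(f"Recovery from the low: {recovery:.1f}%")  # -> 75.0%
```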

The Open Mainnet launch signifies Pi Network’s commitment to decentralization, granting users unrestricted access to peer-to-peer transactions, decentralized applications (dApps), and enhanced blockchain functionalities. This pivotal shift is expected to drive long-term growth while attracting new developers and investors.

Crypto analysts highlight the importance of market caution during such volatile phases but remain optimistic about Pi Coin’s future. As the network matures, many believe Pi could solidify its position among top-performing cryptocurrencies.

Investors are now watching closely to see how the Open Mainnet will shape Pi Network’s evolving ecosystem and its impact on global crypto markets.

In an effort to stay ahead of industry rivals, Microsoft-backed OpenAI has announced its latest breakthrough, Sora, a cutting-edge text-to-video model.


Pritish Bagdi

The move underscores OpenAI's commitment to maintaining a competitive edge in the fast-growing field of artificial intelligence (AI), at a time when text-to-video tools are becoming increasingly popular.


What is Sora?

Sora, which means sky in Japanese, is a text-to-video diffusion model capable of producing minute-long videos that are difficult to distinguish from real footage.

OpenAI stated in a post on the X platform (formerly Twitter) that "Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions."

According to the company, the new model can also generate lifelike videos from still photos or extend user-supplied footage.

"We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction," the post read.

How can you try it?

Most of us will have to wait to use the new AI model. Although the company unveiled the text-to-video model on February 15, it is currently in the red-teaming stage.

Red teaming is the process in which a group of experts, the "red team," simulates real-world use to find flaws and vulnerabilities in a system.

"We are also granting access to a number of visual artists, designers, and filmmakers to gain feedback on how to advance the model to be most helpful for creative professionals," the business stated.

In the meantime, the company has shared a number of demonstrations in its blog post, and OpenAI's CEO has posted videos generated from user-requested prompts on X.

How does it work?

Imagine starting with a noisy, static-filled image on a TV and gradually removing the fuzz to reveal a clean, moving video. That is essentially what Sora does: it uses a transformer architecture to progressively remove noise and produce videos.
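OpenAI has not released Sora's code, so the loop below is only a minimal sketch of generic diffusion-style denoising in Python; the `predict_noise` model, tensor shape, and step count are illustrative assumptions, not Sora's actual design:

```python
import numpy as np

def generate_video(predict_noise, prompt, shape=(16, 64, 64, 3), steps=50):
    """Toy diffusion sampler: start from pure noise, denoise step by step.

    predict_noise stands in for a trained model that estimates the noise
    left in sample x at timestep t, conditioned on the text prompt.
    """
    x = np.random.randn(*shape)         # pure noise: (frames, height, width, RGB)
    for t in reversed(range(steps)):    # walk the noise schedule backwards
        noise_estimate = predict_noise(x, t, prompt)
        x = x - noise_estimate / steps  # peel away a small slice of the noise
    return x                            # a clean (toy) video tensor

# Smoke test with a stand-in "model" that treats a tenth of x as noise:
clip = generate_video(lambda x, t, p: 0.1 * x, "a cat on a skateboard")
print(clip.shape)  # (16, 64, 64, 3)
```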

Rather than producing footage frame by frame, it can generate an entire video at once. Users steer the video's content by feeding the model text descriptions, and the model can keep subjects consistent, for example ensuring that a person looks the same even after briefly walking off-screen.

Think of how GPT models generate text word by word. Sora works in a similar way, but with images and videos: each video is divided into smaller segments known as patches, which play the role that tokens play in language models.
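The patch idea is easy to picture in code. This sketch chops a video tensor into fixed-size spacetime blocks; the patch dimensions are illustrative assumptions, as OpenAI has not published Sora's actual configuration:

```python
import numpy as np

def to_patches(video, pt=4, ph=16, pw=16):
    """Split a (frames, height, width, channels) video into spacetime patches,
    the visual analogue of text tokens. Sizes pt/ph/pw are illustrative."""
    T, H, W, C = video.shape
    patches = []
    for t in range(0, T, pt):
        for y in range(0, H, ph):
            for x in range(0, W, pw):
                patches.append(video[t:t + pt, y:y + ph, x:x + pw])
    return patches  # each entry is a (pt, ph, pw, C) block

video = np.zeros((16, 64, 64, 3))   # 16 frames of 64x64 RGB
print(len(to_patches(video)))       # -> 64 patches (4 x 4 x 4)
```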

"Sora builds on past research in DALL·E and GPT models. It uses the recaptioning technique from DALL·E 3, which involves generating highly descriptive captions for the visual training data. As a result, the model is able to follow the user’s text instructions in the generated video more faithfully," the company said in the blog post.

However, the company has not provided any details on what kind of data the model is trained on.

San Jose, California (USA), January 20, 2024: Samsung, the South Korean smartphone manufacturer, debuted its highly anticipated Galaxy S24 Ultra, Galaxy S24+, and Galaxy S24 on Wednesday.


Pritish Bagdi


At the Galaxy Unpacked 2024 event, the tech giant introduced its new flagship smartphones with Galaxy AI, a comprehensive set of AI-powered capabilities that enhance communication, creativity, search, and performance.

According to Mashable, the AI-powered features will let users ditch third-party apps in favor of built-in tools such as Live Translate, which translates languages in real time during phone calls or text conversations.

Other Galaxy AI features that may aid users are the following:

Chat Assist - Adjusts the tone of your messages to ensure they are suitable for work, social media, or anywhere you desire to share them.

Note Assist - Creates summaries of your notes or offers pre-made templates to jump-start your writing.

Circle to Search - Circle, highlight, scribble, or tap on anything to receive meaningful, high-quality search results.

Google collaborated with Samsung on the AI features in the latter's new flagship products.

Sundar Pichai, Google's CEO, revealed the 'Circle to Search' feature on X, adding, "We continue to apply AI to make Search more helpful and intuitive. Building on features like Lens, Circle to Search is our next key accomplishment, allowing you to search what you see on @Android phones with a single gesture without switching apps. #SamsungUnpacked."

"The Galaxy S24 series transforms our global connectivity and lays the foundation for the next decade of mobile innovation." Galaxy AI is built on our innovation legacy and a detailed understanding of how people use their devices. We're happy to witness how our users around the world use Galaxy AI to enhance their daily lives and open up new possibilities," said TM Roh, President, and Head of Mobile eXperience (MX) Business at Samsung Electronics.

AI features have a big impact on the camera app, particularly in the editing improvements they offer.

This year's new features include the ProVisual Engine and a dedicated ISP block for noise reduction. The ProVisual Engine is a suite of AI-powered tools, such as Generative Edit, which lets you erase objects and have AI fill in the space.

It can also fill in the space left after straightening a crooked shot. According to GSMArena, Instant Slow-mo can generate additional frames to smoothly slow down particular moments in a video.
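Generating in-between frames is a classic frame-interpolation problem. The sketch below shows the simplest possible version, linear blending between neighboring frames; it illustrates the general idea only and is not Samsung's actual, motion-aware algorithm:

```python
import numpy as np

def slow_down(frames, factor=2):
    """Naive slow motion: insert (factor - 1) blended frames between neighbors.

    Shipping features like Instant Slow-mo use learned motion estimation;
    linear cross-fading is only the textbook starting point.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, factor):
            out.append(a + (b - a) * (i / factor))  # blend toward next frame
    out.append(frames[-1])
    return out

clip = [np.full((4, 4, 3), v, dtype=float) for v in (0.0, 1.0)]
print(len(slow_down(clip, factor=4)))  # 2 frames become 5
```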
