Alibaba Upgrades Quark Into a Next-Gen AI Assistant

According to Bloomberg, the Quark app has been updated to incorporate the latest Qwen neural network, enhancing its capabilities. Quark was first introduced in 2016 as a web browser, but it has since been transformed into an AI assistant that combines sophisticated chatbot-style conversation with independent reasoning and task completion in one easy-to-use application.

The “new Quark” is designed to be a versatile tool, capable of tackling a wide range of tasks with remarkable efficiency. From generating high-quality images and drafting detailed articles to planning personalized travel itineraries and creating concise meeting minutes, Quark is poised to become an indispensable companion for its users. This transformation reflects Alibaba’s ambition to integrate artificial intelligence more deeply into everyday life, offering a glimpse into the future of smart, intuitive technology.

Wu Jia, CEO of Quark and Vice President of Alibaba, emphasized the app’s potential to unlock new horizons for its users. “As our model continues to evolve, we see Quark as a gateway to boundless opportunities,” Wu stated. “With the power of artificial intelligence, users can explore and accomplish virtually anything they set their minds to.”

Quark’s journey began nearly a decade ago as a simple web browser, but it has since grown into a powerhouse with a reported user base of 200 million people across China. This impressive milestone underscores Alibaba’s ability to scale and adapt its offerings to meet the demands of a rapidly changing digital landscape.

The revamped Quark builds on Alibaba’s recent advancements in AI technology, including the introduction of the QwQ-32B model in March 2025, a reasoning-focused AI designed to enhance problem-solving and decision-making capabilities. By integrating the Qwen neural network, Quark now stands at the forefront of Alibaba’s AI ecosystem, blending innovation with practicality to cater to both individual and professional needs.

This strategic overhaul positions Quark as more than just an app—it’s a visionary tool that could redefine how users interact with technology, solidifying Alibaba’s role as a global leader in AI-driven solutions. As the company continues to refine its models, Quark promises to deliver an ever-expanding array of features, making it a dynamic platform for creativity, productivity, and exploration.

Event-Driven Architectures for AI Applications: Patterns & Use Cases

The landscape around Artificial Intelligence (AI) is constantly changing, increasing the demand for flexible, scalable, real-time systems. In the development of AI applications, Event-Driven Architecture (EDA) enables this kind of responsiveness at a structural level. This note, accompanying ExploreStack’s editorial calendar, attempts to capture the essence, structure, patterns, and use cases of EDA in relation to AI, with particular focus on drawing clear boundaries for technical managers and practitioners.

Exploring Event-Driven Architecture

Among software architectures, EDA – event-driven architecture – stands out because it allows applications to respond to events in real time while improving scalability and loosening coupling. An event can be anything of significance: a user changing data, activating an element, or a change in system state that requires a reaction. Unlike the traditional request-response architecture, EDA enables asynchronous communication in which individual components publish and subscribe to events independently. This is particularly important for AI applications, which tend to work with huge quantities of data and need to process them quickly so that inferences and actions can be delivered on time.

EDA’s relevance to AI comes largely from its ability to handle highly responsive data workloads. For instance, AI models may need to process streaming data, act on predictions, or be retrained periodically on new information. Because EDA decouples components, it provides flexibility, real-time responsiveness, and the ability to scale, all essential for modern AI systems.
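To make the publish/subscribe idea concrete, here is a minimal, illustrative in-process event bus. It is a sketch only: real deployments would use a broker such as Kafka or RabbitMQ, and the `EventBus` class and event names here are invented for the example.

```python
from collections import defaultdict

class EventBus:
    """A minimal in-process event bus illustrating publish/subscribe decoupling.
    Production systems would use a broker (Kafka, RabbitMQ, etc.) instead."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        # A component registers interest in an event type without
        # knowing anything about who publishes it.
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # The publisher fires and forgets; any number of subscribers may react.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
received = []
# A hypothetical "data.updated" event drives a model-refresh subscriber.
bus.subscribe("data.updated", lambda payload: received.append(payload))
bus.publish("data.updated", {"records": 42})
```

The key property is that the publisher of `data.updated` never calls the subscriber directly, which is exactly the decoupling EDA relies on.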

Key Patterns in AI Event-Driven Applications

Research and industry practices have defined several patterns within Event Driven Architecture (EDA) that are particularly useful for AI applications. These patterns solve certain problems, augment the efficiency of AI systems and improve their effectiveness:

  1. Asynchronous Inference  
    • Most AI models, especially image generation models or those that rely on Large Language Models (LLMs), are computationally expensive and slow to complete. In a synchronous system the application blocks while waiting, leaving the user with little or no feedback. EDA solves this by letting applications publish inference requests as events that are handled by other components, such as workers or serverless functions, which perform the task and publish results back as events, notifying the application when they are finished. Such systems are more responsive, use resources better, and can manage much higher levels of concurrency, as seen in Stable Diffusion applications where asynchronous inference decreases idle time during peak demand periods.
  2. Real-time Data Updates
    • AI models are only as effective as the data they are trained on, and in many applications, data is dynamic, requiring periodic updates or retraining. Events can trigger these updates automatically when new data arrives or when specific conditions are met, such as a threshold number of new records. This ensures the model remains relevant and accurate over time without manual intervention. For example, in conversational search systems, scheduled tasks and workflows configured via EDA ensure timely and accurate data updates in knowledge bases, leveraging event-driven advantages for enhanced user experience.
  3. Event-Triggered Actions
    • AI can analyse events to detect patterns, anomalies, or predictions and trigger further actions within the system. For instance, user behavior events can lead to personalised recommendations, while fraud detection events can initiate alerts or block transactions. This pattern enables proactive and personalised interactions, enhancing user engagement and system efficiency. It is particularly useful in scenarios where immediate action is required, such as in financial systems where real-time fraud detection is critical.
  4. Decoupling Components
    • Complex AI systems often comprise multiple components, such as data ingestion, preprocessing, model training, and prediction, which need to work together but can be managed independently. EDA facilitates this decoupling by using events as the means of communication, allowing each component to operate separately. This modularity makes it easier to scale, maintain, and update individual parts without affecting the entire system, enhancing overall system resilience and flexibility. This pattern is evident in microservices architectures, where AI components can scale independently based on demand.
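The asynchronous-inference pattern above can be sketched with two queues standing in for broker topics. This is a hypothetical illustration, not a production design: `slow_model`, the queue names, and the event shapes are invented for the example.

```python
import queue
import threading

requests = queue.Queue()   # stands in for a "requests" topic on a broker
results = queue.Queue()    # stands in for a "results" topic

def slow_model(prompt):
    # Placeholder for an expensive model call (image generation, an LLM, etc.).
    return f"result for: {prompt}"

def inference_worker():
    # A decoupled worker consumes request events and publishes result events.
    while True:
        event = requests.get()
        if event is None:  # sentinel to shut the worker down
            break
        output = slow_model(event["prompt"])
        results.put({"request_id": event["request_id"], "output": output})

worker = threading.Thread(target=inference_worker, daemon=True)
worker.start()

# The application publishes a request event and stays responsive...
requests.put({"request_id": 1, "prompt": "draw a cat"})

# ...then consumes the result event whenever it arrives.
done = results.get(timeout=5)
requests.put(None)  # stop the worker
worker.join()
```

Because the caller only touches queues, the worker pool can be scaled up or swapped for serverless functions without changing application code.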

Use Cases and Practical Applications

EDA’s application in AI is demonstrated through various use cases, each addressing specific business needs and leveraging the patterns discussed. These use cases highlight how EDA can transform AI applications, improving performance and user experience:

  1. Chatbots and Virtual Assistants
    • In this scenario, user messages are treated as events that trigger natural language processing (NLP) analysis. Based on the intent and entities extracted, further events are generated to fetch data from databases, call external APIs, or perform other actions. Responses are then formatted and sent back to the user via events, enabling efficient handling of concurrent queries and seamless integration with various services. This approach is crucial for maintaining real-time interactions, as seen in AI chatbots that use message queues for efficient information transmission, enhancing user loyalty through proactive, human-like communications.
  2. Recommendation Systems
    • Recommendation systems rely on user interactions, such as clicks, purchases, or ratings, to provide personalized suggestions. These interactions generate events that update user profiles in real-time, triggering the recommendation engine to recalculate and update recommendations. This ensures that suggestions are always based on the latest behavior, enhancing personalization and relevance. For example, e-commerce platforms use EDA to deliver up-to-date product recommendations, improving customer satisfaction and conversion rates.
  3. Fraud Detection
    • In financial institutions, each transaction is an event analyzed by an AI model trained to detect patterns indicative of fraud. If the model identifies a suspicious transaction, it publishes an event to trigger further investigation or block the transaction, enabling real-time detection and response. This use case is critical for reducing financial losses and improving security, with EDA facilitating immediate action based on AI insights.
  4. Predictive Maintenance
    • In IoT applications, sensor data from machinery is streamed as events into the system. These events are processed by an AI model that predicts the likelihood of equipment failure. If the prediction indicates a high risk, an event is published to notify maintenance personnel or automatically schedule maintenance tasks, reducing downtime and optimizing maintenance schedules. This is particularly valuable in manufacturing, where EDA ensures timely interventions based on AI predictions.
  5. Personalised Marketing
    • Customer interactions, such as visiting certain pages or clicking on ads, generate events that build customer profiles. AI models analyze these profiles to determine the most effective marketing messages for each customer. When a customer meets specific criteria, such as not making a purchase in a while, an event triggers the sending of a personalized message, improving engagement and conversion rates. This use case demonstrates how EDA can enhance customer experiences through targeted communications.

Notably, personalised marketing is a less obvious application of EDA: customer-behaviour events trigger tailored messages, boosting engagement in ways not immediately apparent from traditional AI use cases.
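The chatbot use case above can be sketched as a chain of event handlers: a user-message event is classified, the extracted intent selects the next handler, and the handler’s output becomes the response event. Everything here, the `extract_intent` stand-in, the intent names, and the handlers, is hypothetical.

```python
def extract_intent(message):
    # Stand-in for a real NLP/intent model.
    if "weather" in message.lower():
        return "get_weather"
    return "small_talk"

def handle_get_weather(message):
    # In a real system this handler would call an external API,
    # itself triggered by the intent event.
    return "It is sunny today."

def handle_small_talk(message):
    return "Happy to chat!"

HANDLERS = {
    "get_weather": handle_get_weather,
    "small_talk": handle_small_talk,
}

def on_user_message(event):
    # The user message is an event; the extracted intent selects
    # which downstream handler (another event consumer) runs next.
    intent = extract_intent(event["text"])
    return {"user": event["user"], "reply": HANDLERS[intent](event["text"])}

response = on_user_message({"user": "alice", "text": "What's the weather?"})
```

In a full EDA deployment each step would be a separate consumer on a message queue, which is what allows thousands of concurrent conversations to be handled independently.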

Implementation Considerations

When implementing EDA for AI applications, several key considerations ensure the system’s effectiveness and reliability:

  • Choosing the Right Event Broker: Select a robust event broker capable of handling the volume and variety of events, such as Apache Kafka, RabbitMQ, Amazon EventBridge, or Google Cloud Pub/Sub. The choice depends on factors like scalability, latency, and integration with existing systems.
  • Designing Events and Event Schemas: Define clear and consistent event schemas to ensure all components understand the structure and meaning of the events, including event type, payload, and metadata. This is crucial for maintaining interoperability and avoiding errors in event processing.
  • Handling Failures and Retries: Implement mechanisms to handle event processing failures, such as retries with exponential backoff, dead-letter queues for unprocessed events, or alerting systems for manual intervention. This ensures system resilience, especially in high-volume AI applications.
  • Monitoring and Debugging: Use monitoring tools to track event production, consumption, and processing times, identifying bottlenecks and ensuring system performance. Tools like Alibaba Cloud’s Application Real-Time Monitoring Service (ARMS) can be instrumental for long-term operations and maintenance.
  • Security and Compliance: Ensure the event-driven system adheres to security best practices, such as encryption of event data, access controls, and compliance with relevant regulations like GDPR or HIPAA, to protect sensitive AI data and maintain trust.
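Two of the considerations above, consistent event schemas and failure handling with retries plus a dead-letter queue, can be sketched together. This is an illustrative assumption-laden sketch: the schema fields, retry limits, and the `flaky_handler` are invented for the example.

```python
import time

def make_event(event_type, payload, source):
    # A minimal schema: type, payload, and metadata, so every
    # consumer can interpret events consistently.
    return {
        "type": event_type,
        "payload": payload,
        "metadata": {"source": source, "timestamp": time.time()},
    }

dead_letter_queue = []

def process_with_retries(event, handler, max_retries=3, base_delay=0.01):
    for attempt in range(max_retries):
        try:
            return handler(event)
        except Exception:
            # Exponential backoff between attempts: 0.01s, 0.02s, 0.04s, ...
            time.sleep(base_delay * (2 ** attempt))
    # After exhausting retries, park the event for manual inspection.
    dead_letter_queue.append(event)
    return None

event = make_event("model.retrain", {"new_records": 1200}, source="ingestion")

def flaky_handler(ev):
    # Simulates a consumer whose downstream dependency is unavailable.
    raise RuntimeError("downstream unavailable")

process_with_retries(event, flaky_handler)
```

Real brokers offer native versions of both ideas (for example, schema registries and dead-letter topics), but the control flow is essentially this.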

Comparative Analysis: Challenges and Solutions

To further illustrate, consider the following table comparing challenges in AI applications and how EDA addresses them, based on insights from industry practices:

This table, derived from the Alibaba Cloud Community, highlights how EDA tackles specific AI challenges, reinforcing its suitability for these applications.

Conclusion and Future Outlook

EDA provides a flexible and scalable framework that is particularly well-suited for AI applications. By leveraging patterns such as asynchronous inference, real-time data updates, event-triggered actions, and component decoupling, organizations can build AI systems that are responsive, efficient, and adaptable to changing requirements. The use cases, from chatbots to predictive maintenance, demonstrate practical applications that enhance business outcomes and user experiences.

Looking forward, as AI continues to advance and integrate more deeply into various aspects of business and society, the importance of robust, event-driven architectures will only grow. Technical leaders, particularly CTOs, can position their organizations at the forefront of this evolution by adopting EDA, delivering innovative and high-impact AI solutions that meet the demands of a dynamic digital landscape.

China Set to Launch Strategic Policy Driving RISC-V Chip Adoption in 2025

In a landmark move poised to reshape its technological landscape, China is gearing up to launch its inaugural national policy championing the adoption of RISC-V chips. This strategic initiative, slated for release as early as March 2025, marks a significant step in the country’s quest to pivot away from Western-dominated semiconductor technologies and bolster its homegrown innovation amid escalating global tensions.

Insiders familiar with the development reveal that the policy has been meticulously crafted through a collaborative effort involving eight key government entities. Among them are heavyweights like the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Science and Technology, and the China National Intellectual Property Administration. Together, these bodies aim to cement RISC-V’s role as a cornerstone of China’s burgeoning tech ecosystem, fostering an environment ripe for domestic chip development and deployment.

The mere whisper of this policy has already sent ripples through the financial markets, igniting a wave of optimism among investors. On the day of the leak, Chinese semiconductor stocks staged an impressive rally. The CSI All-Share Semiconductor Products and Equipment Index, which had been languishing earlier, reversed course to surge by as much as 2.5%. Standout performers included VeriSilicon, which hit its daily trading cap with a 10% spike, alongside ASR Microelectronics, Shanghai Anlogic Infotech, and 3Peak, whose shares soared between 8.6% and an eye-catching 15.4% in afternoon trading.

At the heart of this policy push lies RISC-V, an open-source chip architecture that’s steadily carving out a global niche as a versatile, cost-effective rival to proprietary giants like Intel’s x86 and Arm Holdings’ microprocessor designs. Unlike its high-powered counterparts, RISC-V is often deployed in less demanding applications—think smartphones, IoT devices, and even AI servers—making it a pragmatic choice for a wide swath of industries. In China, its allure is twofold: slashed development costs and, critically, its freedom from reliance on U.S.-based firms, a factor that’s taken on heightened urgency amid trade restrictions and geopolitical friction.

Until now, RISC-V’s rise in China has been organic, driven by market forces rather than official mandates. This forthcoming policy changes the game, thrusting the architecture into the spotlight as a linchpin of Beijing’s broader campaign to achieve technological self-sufficiency. The timing is no coincidence—U.S.-China relations remain strained, with American policymakers sounding alarms over China’s growing leverage in the RISC-V space. Some U.S. lawmakers have even pushed to curb American companies’ contributions to the open-source platform, fearing it could turbocharge China’s semiconductor ambitions.

China’s RISC-V ecosystem is already buzzing with activity, spearheaded by homegrown innovators like Alibaba’s XuanTie division and rising star Nuclei System Technology, both of which have rolled out commercially viable RISC-V processors. The architecture’s flexibility is proving especially attractive in the AI sector, where models like DeepSeek thrive on efficient, lower-end chips. For smaller firms chasing affordable AI solutions, RISC-V offers a tantalizing blend of performance and price—a trend that could gain serious momentum under the new policy.

Sun Haitao, a manager at China Mobile System Integration, underscored the pragmatic appeal of RISC-V in a recent statement. “Even if these chips deliver just 30% of the performance of top-tier processors from NVIDIA or Huawei,” he noted, “their cost-effectiveness becomes undeniable when you scale them across multiple units.” This scalability could prove transformative for industries looking to maximize output without breaking the bank.

As China prepares to roll out this groundbreaking policy, the global tech community is watching closely. For Beijing, it’s a calculated gambit to secure its place at the forefront of the semiconductor race—one that could redefine the balance of power in a world increasingly divided by technology.

Wallets of darknet marketplace Nemesis hit by US sanctions

The US Treasury Department’s Office of Foreign Assets Control (OFAC) has added 44 Bitcoin and five Monero addresses associated with the shuttered darknet marketplace Nemesis Market to the Specially Designated Nationals (SDN) list.

The press release says they were controlled by Iranian citizen Behrouz Parsarad, who was allegedly the platform’s administrator.

On March 20, 2024, Germany’s Federal Criminal Police Office (BKA) seized Nemesis Market infrastructure in Germany and Lithuania, disrupting its operations. In the process, police confiscated digital assets worth €94,000.

The investigation began in October 2022.

The platform, created in 2021, sold drugs, stolen data and credit cards, as well as cybercriminal services, including ransomware, phishing, and DDoS attacks.

Before the shutdown, Nemesis had an active audience of 30,000 users who carried out ~$30 million in drug transactions.

Parsarad received millions of dollars in commissions from the transactions and facilitated the laundering of digital assets, according to OFAC.

The administrator remains at large. According to the agency, Parsarad may have “discussed the creation of a new darknet market” with former suppliers.

Recall that in April 2022, German police confiscated the servers of the darknet marketplace Hydra and seized 543 BTC, and the US Treasury imposed sanctions on the platform.

That same month, an American court indicted Russian Dmitry Pavlov in absentia for administering Hydra, providing it with hosting services, conspiring to launder money, and distributing drugs. At the same time, the Meshchansky District Court of Moscow arrested Pavlov on another charge.

In December 2024, the Moscow Regional Court sentenced Hydra founder Stanislav Moiseev to life imprisonment and a fine of 4 million rubles.

Opinion: AI Will Never Gain Consciousness

Artificial intelligence will never become a conscious being, because it lacks the intentions inherent in humans and other biological species. This statement was made by Sandeep Nailwal, co-founder of Polygon and the AI company Sentient, in a conversation with Cointelegraph.

The expert does not believe that the end of the world is possible due to artificial intelligence gaining consciousness and seizing power over humanity.

Nailwal was critical of the theory that consciousness arises accidentally as a result of complex chemical interactions or processes. Although such processes can lead to the emergence of complex cells, they do not give rise to consciousness, the entrepreneur noted.

The co-founder of Polygon also expressed concerns about the risks of surveillance of people and the restriction of freedoms by centralized institutions with the help of artificial intelligence. Therefore, AI should be transparent and democratic, he believes.

“[…] Ultimately, global AI, which can create a world without borders, must be controlled by every person,” Nailwal emphasized.

He added that everyone should have a personal artificial intelligence that is loyal and protects against the neural networks of influential corporations.

Recall that in January, Simon Kim, CEO of the crypto venture fund Hashed, expressed confidence that the future of artificial intelligence depends on a radical shift: opening the “black box” of centralized models and creating a decentralized, transparent ecosystem on the blockchain.

Agentic AI: Pioneering Autonomy and Transforming Business Landscapes


The new autonomous systems labelled AI agents represent the latest evolution of AI technology and mark a new era in business. In contrast to traditional AI models that simply follow the commands given to them and emit outputs in a specific format, AI agents work with a certain level of freedom. According to Google, these agents are capable of functioning on their own, without constant human supervision. The World Economic Forum describes them as systems with sensors to perceive and effectors to interact with their environment. AI agents are expected to transform industries as they evolve from rigid, rule-based frameworks to sophisticated models adept at intricate decision-making. With unprecedented autonomy comes equally unprecedented responsibility: the additional benefits agentic AI technology brings are accompanied by unique challenges that invite careful consideration, planning, governance, and foresight.

The Mechanics of AI Agents: A Deeper Dive

Traditional AI tools, such as Generative AI (GenAI) or predictive analytics platforms, rely on predefined instructions or prompts to deliver results. In contrast, AI agents exhibit dynamic adaptability, responding to real-time data and executing multifaceted tasks with minimal oversight. Their functionality hinges on a trio of essential components:

  • Foundational AI Model: At the heart of an AI agent lies a powerful large language model (LLM), such as GPT-4, LLama, or Gemini, which provides the computational intelligence needed for understanding and generating responses.
  • Orchestration Layer: This layer serves as the agent’s “brain,” managing reasoning, planning, and task execution. It employs advanced frameworks like ReAct (Reasoning and Acting) or Chain-of-Thought prompting, enabling the agent to decompose complex problems into logical steps, evaluate outcomes, and adjust strategies dynamically—mimicking human problem-solving processes.
  • External Interaction Tools: These tools empower agents to engage with the outside world, bridging the gap between digital intelligence and practical application. They include:
    • Extensions: Enable direct interaction with APIs and services, allowing agents to retrieve live data (e.g., weather updates or stock prices) or perform actions like sending emails.
    • Functions: Offer a structured mechanism for agents to propose actions executed on the client side, giving developers fine-tuned control over outputs.
    • Data Stores: Provide access to current, external information beyond the agent’s initial training dataset, enhancing decision-making accuracy.

This architecture transforms AI agents into versatile systems capable of navigating real-world complexities with remarkable autonomy.
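The orchestration layer described above can be sketched as a stripped-down ReAct-style loop: the agent alternates between reasoning (choosing a tool) and acting (calling it), feeding each observation back into the next decision. This is a toy illustration under heavy assumptions; `fake_llm_decide` is a rule-based stand-in for a real LLM, and the single weather tool is invented.

```python
def fake_llm_decide(goal, observations):
    # A real agent would prompt an LLM with the goal and the observation
    # history; here the "plan" is hard-coded for illustration.
    if not observations:
        return ("get_weather", "Paris")
    return ("finish", f"In Paris it is {observations[-1]}.")

TOOLS = {
    # Stand-in for an extension that would call a live weather API.
    "get_weather": lambda city: "18°C and clear",
}

def run_agent(goal, max_steps=5):
    observations = []
    for _ in range(max_steps):
        # Reason: pick the next action based on what has been observed so far.
        action, arg = fake_llm_decide(goal, observations)
        if action == "finish":
            return arg
        # Act: invoke the chosen tool and record the observation.
        observations.append(TOOLS[action](arg))
    return "gave up"

answer = run_agent("What's the weather in Paris?")
```

The loop structure, rather than any one tool, is the point: decompose, act, observe, and re-plan until the goal is met or a step budget runs out.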

Multi-Agent Systems: The Newest Frontier

The Multi-Agent System (MAS) market is poised for tremendous growth, with McKinsey projecting a staggering growth rate of nearly 28% through 2030. Bloomberg recently predicted that AI breakthroughs will soon give rise to multi-agent systems: collaborative networks of AI agents working together towards ambitious objectives. These systems promise scalability beyond what single agents can deliver.

Imagine a smart city in which multiple AI agents work alongside each other: one controlling the traffic signals, another managing traffic-directing units, and a third rerouting emergency responders. And all of this happens in real time!

Governance is key here to prevent systemic failure: conflicting commands can cause paralysis or outright dysfunction. Multi-agent systems need enough freedom to coordinate effectively, but that freedom must be bounded by agreed-upon, well-defined protocols.

Opportunities and Challenges

The potential of AI agents is game changing, but their independence creates grave concerns. The World Economic Forum highlights some challenges that companies must deal with: 

  • Risks Associated with Autonomy: Ensuring safety and reliability becomes harder as agents become more independent. For example, an unmonitored agent could execute a resource allocation that triggers operational failures with cascading effects.
  • Lack of Accountability: Trust is already fragile due to the opaque reasoning of ‘black box’ systems, and it becomes even more crucial in high-risk healthcare or finance settings. Ensuring transparency and accountability becomes non-negotiable.
  • Risks Surrounding Privacy and Security: An agent can only function effectively with access to a multitude of sensitive systems and datasets, which raises the question: how do we grant sufficient permissions without compromising security? Strong policies are needed to protect sensitive data and privacy while preventing breaches.

Guarding against these risks requires proactive measures: consistent monitoring, adherence to ethical AI principles, and human-in-the-loop oversight for high-stakes decisions. Organizations also need auditing tools that can monitor agents and correct their course when they deviate, so that control is retained and organizational goals are maintained.

The Human-AI Partnership

Even though AI agents can act independently, their purpose is not to replace human reasoning but to augment it. The EU AI Act reminds us of the necessity of human intervention in sensitive processes such as security or legal compliance. The best arrangement is one where humans and machines work together: agents perform the monotonous, repetitive work of processing large amounts of data, freeing humans to be more strategic, creative, and ethical.

In a logistics company, for instance, an AI agent might autonomously optimize delivery routes using live traffic information, while a manager applies judgment before approving the plan, factoring in customer preferences or other considerations the model cannot see. This keeps human control and supervision in place while still enhancing efficiency.

Guidelines for Implementing Agentic AI Strategically

Google and the World Economic Forum converge on a central idea: the responsible use of AI agents can result in outstanding value creation and unparalleled innovation. To reap that value with manageable risk, businesses should employ the following practices:

  • Develop Skills: Train the workforce to build, implement, and administer AI agents so the technology is applied effectively.
  • AI Ethics: Establish governance frameworks that adhere to international benchmarks, such as the EU AI Act, requiring fair and accountable operation of agents.
  • Operational Boundaries: Delegated agent discretion must come with safeguards that prevent overreach or unsanctioned lateral decision-making, enforced through explicit controls.
  • Validation Checks: Keep agent behaviour aligned with organizational needs through active auditing, stress testing, and ongoing refinement against organizational objectives.


Final thoughts

By integrating reasoning and planning, agentic AI gains the ability to act on its own, and AI agents mark a pivotal leap in the evolution of artificial intelligence. Their potential to transform industries, from personalized healthcare to smart cities, is phenomenal, but deploying AI carelessly would be a grave mistake. For AI agents to be dependable companions, trust and security must anchor their development.

Organizations that strike the right balance, letting agents innovate while maintaining human supervision, will be the ones leading the charge in this technological revolution. Agentic AI is not just another business tool; it is a paradigm shift re-imagining autonomy. That future will belong to those who embrace its potential with clarity and caution.

The Billion-Dollar Heist: How Bybit Survived the Largest Crypto Hack in History

On February 21, the cryptocurrency world was shaken when Bybit, one of the largest Bitcoin exchanges, fell victim to a staggering $1.5 billion hack – marking it as the biggest cyber heist in crypto history. Despite the massive breach, the platform continued operating, thanks in part to swift crisis management and the backing of industry heavyweights.

How the Hack Unfolded

On February 21, on-chain detective ZachXBT reported suspicious outflows of 499,395 ETH (about $1.46 billion at the time) from Bybit. The company’s CEO Ben Zhou confirmed the hack, and his team almost immediately published a statement explaining that the incident occurred during a transfer of ETH from cold multisig storage to a hot wallet.

The attackers spoofed the transaction-signing interface so that every participant in the procedure saw what appeared to be the correct address, while the underlying transaction altered the smart contract’s logic. The hackers thereby gained control of the ETH wallet and withdrew all the funds.

Zhou hastened to reassure clients and emphasized that the platform remains solvent and continues to process withdrawal requests, albeit with a delay: within about 10 hours after the hack, the exchange recorded a record number of withdrawal requests – more than 350,000. At that time, about 2,100 requests remained pending, while 99.994% of transactions were completed.

Nevertheless, the platform’s CEO still asked partners to provide a loan in ETH – the funds were needed to cover liquidity during the crisis period. As a result, more than 10 companies supported the exchange.

Huobi co-founder Du Jun contributed 10,000 ETH and promised not to withdraw it for a month. The co-founders of Conflux and Mask Network also announced the deposit of Ether to the exchange’s cold wallets. Coinbase Head of Product Conor Grogan wrote that Binance and Bitget sent >50,000 ETH there too.

According to reporter Colin Wu, 12,652 stETH (around $33.75 million) were transferred from MEXC to Bybit’s cold wallet.

The ETH price responded to the Bybit hack by falling to $2,625 (Binance), but recovered fairly quickly. By the evening of February 23, the quotes momentarily exceeded $2,850, after which they corrected to $2,690 (as of February 24).

Bybit representatives said that information about the incident has been “reported to the relevant authorities.” In addition, cooperation with on-chain analytics providers has allowed them to identify and isolate the associated addresses, limiting the attackers’ ability to “withdraw ETH through legitimate markets.”

As of February 24, Bybit has fully restored its Ethereum reserves (~444,870 ETH).

Who Was Behind the Attack?

According to ZachXBT, the unknown attackers quickly exchanged some of the stolen mETH and stETH tokens for ETH via decentralized exchanges, and 10,000 ETH were distributed among 36 wallets.

DeFi Llama founder 0xngmi noted that the methods used in this attack resemble those in the July 2024 hack of the Indian exchange WazirX. At that time, Elliptic analysts concluded that North Korean hackers were behind the attack.

0xngmi’s assumption was confirmed by Arkham Intelligence. According to the firm, on the day of the Bybit hack, investigator ZachXBT “provided irrefutable evidence of Lazarus Group’s involvement in the hack”:

“His analysis contains a detailed breakdown of test transactions and associated wallets used before the attack, as well as a number of graphs and timestamps. This data has been transferred to the exchange team to assist with the investigation.”

Dmitry Machikhin, founder of the AML service BitOK and a crypto investor, noted that the stolen cryptocurrency was being actively moved out of the Ethereum network to other blockchains. According to his observations, immediately after the hack the assets were distributed across 48 different addresses.

At the second stage:

  • crypto assets from these addresses were gradually split into even smaller parts (50 ETH each);
  • funds were sent through bridges (eXch and Chainflip) to other networks.
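The splitting pattern described above is easy to flag mechanically: a laundering address that carves its balance into identical chunk-sized transfers stands out against normal, irregular payment flows. Below is a toy heuristic; the transfer amounts are invented for illustration (real tracing operates over actual on-chain transfer data):

```python
# Hypothetical outflows (in ETH) from one of the 48 distribution addresses
outflows = [50, 50, 50, 50, 50, 50, 50, 50, 50, 12.3]

def uniform_split_share(amounts, chunk=50):
    """Return the fraction of transfers that are exact chunk-sized pieces."""
    hits = sum(1 for a in amounts if a == chunk)
    return hits / len(amounts)

# A high share of identical chunk-sized transfers suggests mechanical splitting
print(uniform_split_share(outflows))  # 0.9
```

In practice, analytics firms combine signals like this with timing and destination clustering (e.g., many chunks routed to the same bridge) rather than relying on amount uniformity alone.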

The accompanying image shows one of the 48 addresses splitting its transfers into 50 ETH chunks and routing them to Chainflip.

According to Taproot Wizards co-founder Eric Wall, the North Korean hackers will likely convert all ERC-20 tokens to ETH, exchange the resulting ETH for BTC, and then gradually convert the bitcoin to yuan through Asian exchanges – a process that, in his view, could take years.

ZachXBT reported that Lazarus transferred 5,000 ETH to a new address and began laundering the funds through the centralized mixer eXch, then converting them to bitcoin via Chainflip. Chainflip confirmed that it had recorded the attackers’ attempts to withdraw the stolen Bybit funds in bitcoin through its platform. The team disabled some front-end services, but completely stopping the protocol is impossible given its decentralized structure of 150 nodes.

The mETH Protocol team reported that they blocked the withdrawal of 15,000 cmETH (~$43.5 million) and redirected the assets from the attacker’s address to a recovery account. Tether CEO Paolo Ardoino said that the company froze 181,000 USDT related to the attack.

In a comment to ForkLog, Bitget CEO Gracie Chen emphasized that “the exchange’s systems have already blacklisted the attackers’ wallets.”

As of February 23, the attackers had exchanged 37,900 ETH (about $106 million) for bitcoin and other assets through Chainflip, THORChain, LiFi, DLN, and eXch. The hackers’ address still had 461,491 ETH of the 499,395 ETH stolen.

What to Do?

After the hack, some community members began talking about rolling back the state of the Ethereum network to return the stolen funds. Former BitMEX CEO Arthur Hayes, for example, noted that as an investor with large ETH holdings, he would support a community decision to roll the chain back to an earlier state – as was done after The DAO hack in 2016.

Bitcoin maximalist Samson Mow also spoke in support of restoring the blockchain, but leading Ethereum developer Tim Beiko criticized the idea. According to him, the Bybit incident stemmed from the compromised interface misrepresenting transaction data, not from a technical problem in Ethereum itself.

Moreover, after the hack the funds quickly spread across the complex ecosystem of the second-largest cryptocurrency by capitalization. “Rolling back” the network would mean canceling many legitimate transactions, some of them tied to actions outside the Ethereum network. Yuga Labs Vice President, known as Quit, made the same point, adding that many ordinary users would lose money and that the accounting systems of large players like Circle and Tether would break down.

What’s the Bottom Line

The Bybit hack turned out to be the largest in the crypto industry to date. Bitget’s CEO, however, saw no reason to panic: according to her, the losses are equivalent to Bybit’s annual profit ($1.5 billion), and clients’ funds are completely safe.

Nor did the incident affect market sentiment. According to Glassnode, bitcoin’s implied volatility remains close to record lows, and price fluctuations around the hack subsided after Strategy founder Michael Saylor published a chart of the company’s coin purchases.

This time there was no platform collapse or market panic: a quick response and community participation helped restore liquidity and partially block the stolen assets. The incident nonetheless highlighted a persistent problem – even large centralized platforms remain vulnerable to attackers.

Grok Names Elon Musk as the Main Disinformer

Elon Musk is the main disseminator of disinformation on X, according to Grok, the AI assistant from the entrepreneur’s startup xAI that is integrated into his social network.

The billionaire has a huge audience and often spreads false information on various topics, the chatbot claims. Other disinformers named by the neural network include Donald Trump, Robert F. Kennedy Jr., Alex Jones, and RT (Russian state television).

Trump shares false claims about elections, Kennedy Jr. about vaccines, and Alex Jones is known for spreading conspiracy theories. RT lies about political issues, Grok added.

Grok’s Top Disseminators of Disinformation. Data: X.

The chatbot cited Rolling Stone, The Guardian, NPR, and NewsGuard as sources of information.

“The selection process involved analyzing multiple sources, including academic research, fact-checking organizations, and media reports, to identify those with significant influence and a history of spreading false or misleading information,” the AI noted.

The criteria for compiling the rankings included the volume of false information spread, the number of followers, and mentions in credible reports.

When asked for clarification, Grok noted that the findings may be biased because the sources provided are mostly related to the funding or opinions of Democrats and liberals.

Recall that in January, artificial intelligence was used to spread fake news about the fires in Southern California.

A similar situation arose after Hurricane Helene.

Google Unveils Memory Feature for Gemini AI Chatbot

Google has launched a notable update to its Gemini AI chatbot, equipping it with the ability to remember details from previous conversations, a development experts are calling a major advancement.

In a blog post released on Thursday, Google detailed how this new capability allows Gemini to store information from earlier chats, provide summaries of past discussions, and craft responses tailored to what it has learned over time.

This upgrade eliminates the need for users to restate information they’ve already provided or sift through old messages to retrieve details. By drawing on prior interactions, Gemini can now deliver answers that are more relevant, cohesive, and enriched with additional context pulled from its memory. This results in smoother, more personalized exchanges that feel less fragmented and more like a continuous dialogue.

Rollout Plans and Broader Access
The memory feature is first being introduced to English-speaking users subscribed to Google One AI Premium, a $20 monthly plan offering enhanced AI tools. Google plans to extend this functionality to more languages in the near future and will soon bring it to business users via Google Workspace Business and Enterprise plans.

Tackling Privacy and User Control
While the ability to recall conversations offers convenience, it may raise eyebrows among those concerned about data privacy. To address this, Google has built in several options for users to oversee their chat data. Through the “My Activity” section in Gemini, individuals can view their stored conversations, remove specific entries, or decide how long data is kept. For those who prefer not to use the feature at all, it can be fully turned off, giving users complete authority over what the AI retains.

Google has also made it clear that it won’t use these stored chats to refine its AI models, putting to rest worries about data being repurposed.

The Race to Enhance AI Memory

Google isn’t alone in its efforts to boost chatbot memory. OpenAI’s Sam Altman has highlighted that better recall is a top demand from ChatGPT users. Over the last year, both companies have rolled out features letting their AIs remember things like a user’s favorite travel options, food preferences, or even their preferred tone of address. Until now, though, these memory tools have been fairly limited and didn’t automatically preserve entire conversation histories.

Gemini’s new recall ability marks a leap toward more fluid and insightful AI exchanges. By keeping track of past talks, it lets users pick up where they left off without losing the thread, proving especially handy for long-term tasks or recurring questions.

As this feature spreads to more users, Google underscores its commitment to transparency and control, ensuring people can easily manage, erase, or opt out of data retention altogether.

Sam Altman Talks About the Features of GPT-4.5 and GPT-5

OpenAI CEO Sam Altman shared the startup’s plans to release GPT-4.5 and GPT-5 models. The company aims to simplify its product offerings by making them more intuitive for users.

Altman acknowledged that the current product line has become too complex, and OpenAI is looking to change that.

“We hate model selection as much as you do and want to get back to magical unified intelligence,” he wrote.

GPT-4.5, codenamed Orion, will be the startup’s last AI model without a “chain of reasoning” mechanism. The next step is to move toward more integrated solutions.

The company plans to combine the o and GPT series models, creating systems capable of:

  • using all available tools;
  • independently determining when deep thinking is needed and when an instant solution is enough;
  • adapting to a wide range of tasks.

GPT-5 will integrate various technologies, including o3. Other features will include Canvas mode, search, Deep Research, and much more.

Free-tier users will get unlimited access to GPT-5 at the standard intelligence setting, while Plus and Pro subscribers will be able to use advanced features at a higher level of intelligence.

Regarding release dates, Altman wrote in replies to the tweet that GPT-4.5 is “weeks” away and GPT-5 “months” away.

According to Elon Musk, Grok 3 – a chatbot competing with ChatGPT – is in the final stages of development and will be released within one to two weeks, Reuters reports.

“Grok 3 has very powerful reasoning capabilities, so in the tests we’ve done so far, Grok 3 outperforms all the models that we know of, so that’s a good sign,” the entrepreneur said during a speech at the World Governments Summit in Dubai.

Recall that Altman rejected a $97.4 billion bid from Musk and a group of investors to buy the non-profit that controls OpenAI. The startup’s CEO described the offer as an attempt to “slow down” a competing project.