Alibaba Upgrades Quark Into a Next-Gen AI Assistant

According to Bloomberg, the Quark app has been updated to incorporate Alibaba’s latest Qwen models, significantly expanding its capabilities. First introduced in 2016 as a web browser, Quark has now been transformed into an AI assistant that combines chatbot-style conversation with independent reasoning and task completion in one easy-to-use application.

The “new Quark” is designed to be a versatile tool, capable of tackling a wide range of tasks with remarkable efficiency. From generating high-quality images and drafting detailed articles to planning personalized travel itineraries and creating concise meeting minutes, Quark is poised to become an indispensable companion for its users. This transformation reflects Alibaba’s ambition to integrate artificial intelligence more deeply into everyday life, offering a glimpse into the future of smart, intuitive technology.

Wu Jia, CEO of Quark and Vice President of Alibaba, emphasized the app’s potential to unlock new horizons for its users. “As our model continues to evolve, we see Quark as a gateway to boundless opportunities,” Wu stated. “With the power of artificial intelligence, users can explore and accomplish virtually anything they set their minds to.”

Quark’s journey began nearly a decade ago as a simple web browser, but it has since grown into a powerhouse with a reported user base of 200 million people across China. This impressive milestone underscores Alibaba’s ability to scale and adapt its offerings to meet the demands of a rapidly changing digital landscape.

The revamped Quark builds on Alibaba’s recent advancements in AI technology, including the introduction of the QwQ-32B model in March 2025, a reasoning-focused AI designed to enhance problem-solving and decision-making capabilities. By integrating the Qwen models, Quark now stands at the forefront of Alibaba’s AI ecosystem, blending innovation with practicality to cater to both individual and professional needs.

This strategic overhaul positions Quark as more than just an app—it’s a visionary tool that could redefine how users interact with technology, solidifying Alibaba’s role as a global leader in AI-driven solutions. As the company continues to refine its models, Quark promises to deliver an ever-expanding array of features, making it a dynamic platform for creativity, productivity, and exploration.

Meta Develops Specialised AI Chip

Meta is testing its own chip for training AI systems, Reuters reports, citing sources.

According to the agency, the new processor is a dedicated accelerator designed specifically for artificial intelligence tasks. This approach makes the chip more energy-efficient than the graphics processing units (GPUs) traditionally used for AI workloads.

The company is collaborating with Taiwan’s TSMC. The initial stage of development has been completed, which included creating prototypes and sending them to the chip factory for test production.

Developing its own processors is part of Meta’s plan to reduce infrastructure costs. It is betting on artificial intelligence to ensure growth.

The corporation forecasts 2025 expenses of $114-119 billion, of which up to $65 billion is earmarked for the artificial intelligence sector.

Meta wants to start using its own chips for AI tasks in 2026, Reuters writes.

In May 2023, the company introduced two specialised processors for artificial intelligence and video processing tasks.

Earlier, the media learned about OpenAI working on its own AI processor in partnership with Broadcom and TSMC.

The Chinese company ByteDance is developing a similar product in collaboration with Broadcom.

Event-Driven Architectures for AI Applications: Patterns & Use Cases

The landscape around Artificial Intelligence (AI) is constantly changing, which increases the demand for flexible, scalable, real-time systems. When developing AI applications, the Event-Driven Architecture (EDA) approach provides that flexibility and responsiveness at the structural level. This note, accompanying ExploreStack’s editorial calendar, captures the essence, structure, and patterns of EDA in relation to AI, along with use cases and implementation concerns, with a particular focus on practical boundaries for technical managers and practitioners.

Exploring Event-Driven Architecture

Compared with other architectural styles, event-driven architecture (EDA) stands out because it allows applications to respond to events in real time while improving scalability and keeping components loosely coupled. An event is any occurrence of significance, such as a user updating data, interacting with a UI element, or a change in system state that requires a reaction. Unlike the traditional request-response model, EDA enables asynchronous communication: individual components publish and subscribe to events independently of one another. This is particularly important for AI applications, which tend to handle huge volumes of data and must process them quickly enough to deliver timely inferences and actions.

EDA is relevant to AI chiefly because it copes well with demanding, fast-moving data workloads. AI models may need to process streaming data, act on predictions as they are produced, or ingest fresh data on a recurring schedule. Because EDA decouples components, it provides the flexibility, real-time responsiveness, and scalability that modern AI systems require, which makes it an ideal fit. A minimal publish/subscribe sketch follows.
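
To make the publish/subscribe idea concrete, here is a minimal, illustrative Python sketch. It uses an in-process queue as a stand-in for a real event broker such as Kafka or RabbitMQ; the event type `user.data.updated` and the handler are hypothetical names chosen for illustration only.

```python
import json
import queue
from dataclasses import dataclass, field
from datetime import datetime, timezone

# In-process stand-in for an event broker such as Kafka or RabbitMQ.
broker: "queue.Queue[dict]" = queue.Queue()


@dataclass
class Event:
    """A minimal event envelope: type, payload, and metadata."""
    type: str
    payload: dict
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def publish(event: Event) -> None:
    """Producer side: serialize the event and hand it to the broker."""
    broker.put({"type": event.type, "payload": event.payload,
                "timestamp": event.timestamp})


def consume_one() -> None:
    """Consumer side: react to whatever event arrives next."""
    message = broker.get()
    if message["type"] == "user.data.updated":
        # An AI component could trigger re-scoring or retraining here.
        print("Re-scoring user:", json.dumps(message["payload"]))


if __name__ == "__main__":
    publish(Event("user.data.updated", {"user_id": 42, "field": "address"}))
    consume_one()
```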

Key Patterns in AI Event-Driven Applications

Research and industry practice have identified several patterns within Event-Driven Architecture (EDA) that are particularly useful for AI applications. These patterns address specific problems and improve the efficiency and effectiveness of AI systems:

  1. Asynchronous Inference  
    • Most AI models, especially image-generation models or those built on Large Language Models (LLMs), require a great deal of computation and take correspondingly long to complete. In a synchronous system this blocks the application and leaves the user waiting. EDA solves the problem by letting applications publish inference requests as events that are handled by other components, from worker processes to serverless functions, which perform the task and publish the results back as events, notifying the application when they are finished. Such systems are more responsive, use resources better, and can sustain much higher concurrency, as seen in Stable Diffusion applications where asynchronous inference decreases idle time during peak demand. A minimal sketch of this pattern appears after this list.
  2. Real-time Data Updates
    • AI models are only as effective as the data they are trained on, and in many applications, data is dynamic, requiring periodic updates or retraining. Events can trigger these updates automatically when new data arrives or when specific conditions are met, such as a threshold number of new records. This ensures the model remains relevant and accurate over time without manual intervention. For example, in conversational search systems, scheduled tasks and workflows configured via EDA ensure timely and accurate data updates in knowledge bases, leveraging event-driven advantages for enhanced user experience.
  3. Event-Triggered Actions
    • AI can analyse events to detect patterns, anomalies, or predictions and trigger further actions within the system. For instance, user behavior events can lead to personalised recommendations, while fraud detection events can initiate alerts or block transactions. This pattern enables proactive and personalised interactions, enhancing user engagement and system efficiency. It is particularly useful in scenarios where immediate action is required, such as in financial systems where real-time fraud detection is critical.
  4. Decoupling Components
    • Complex AI systems often comprise multiple components, such as data ingestion, preprocessing, model training, and prediction, which need to work together but can be managed independently. EDA facilitates this decoupling by using events as the means of communication, allowing each component to operate separately. This modularity makes it easier to scale, maintain, and update individual parts without affecting the entire system, enhancing overall system resilience and flexibility. This pattern is evident in microservices architectures, where AI components can scale independently based on demand.
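
The asynchronous inference pattern from item 1 can be sketched roughly as follows. This is an illustrative outline only: `run_model` stands in for a real inference call (for example, a Stable Diffusion pipeline), and the in-process queues again substitute for a production event broker.

```python
import queue
import threading
import uuid

requests: "queue.Queue[dict]" = queue.Queue()   # inference-request events
results: "queue.Queue[dict]" = queue.Queue()    # inference-result events


def run_model(prompt: str) -> str:
    """Placeholder for an expensive model call (e.g. image generation)."""
    return f"<image generated for: {prompt}>"


def worker() -> None:
    """Consume request events, run inference, publish result events."""
    while True:
        event = requests.get()
        if event is None:                  # shutdown signal
            break
        output = run_model(event["prompt"])
        results.put({"request_id": event["request_id"], "output": output})


def submit(prompt: str) -> str:
    """Application side: publish a request event and return immediately."""
    request_id = str(uuid.uuid4())
    requests.put({"request_id": request_id, "prompt": prompt})
    return request_id


if __name__ == "__main__":
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    rid = submit("a lighthouse at dusk")     # non-blocking call
    print("submitted request", rid)
    print("result event:", results.get())    # arrives asynchronously
    requests.put(None)                        # stop the worker
```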

Use Cases and Practical Applications

EDA’s application in AI is demonstrated through various use cases, each addressing specific business needs and leveraging the patterns discussed. These use cases highlight how EDA can transform AI applications, improving performance and user experience:

  1. Chatbots and Virtual Assistants
    • In this scenario, user messages are treated as events that trigger natural language processing (NLP) analysis. Based on the intent and entities extracted, further events are generated to fetch data from databases, call external APIs, or perform other actions. Responses are then formatted and sent back to the user via events, enabling efficient handling of concurrent queries and seamless integration with various services. This approach is crucial for maintaining real-time interactions, as seen in AI chatbots that use message queues for efficient information transmission, enhancing user loyalty through proactive, human-like communications.
  2. Recommendation Systems
    • Recommendation systems rely on user interactions, such as clicks, purchases, or ratings, to provide personalized suggestions. These interactions generate events that update user profiles in real-time, triggering the recommendation engine to recalculate and update recommendations. This ensures that suggestions are always based on the latest behavior, enhancing personalization and relevance. For example, e-commerce platforms use EDA to deliver up-to-date product recommendations, improving customer satisfaction and conversion rates.
  3. Fraud Detection
    • In financial institutions, each transaction is an event analyzed by an AI model trained to detect patterns indicative of fraud. If the model identifies a suspicious transaction, it publishes an event to trigger further investigation or block the transaction, enabling real-time detection and response. This use case is critical for reducing financial losses and improving security, with EDA facilitating immediate action based on AI insights (a simplified sketch of such a handler appears after this list).
  4. Predictive Maintenance
    • In IoT applications, sensor data from machinery is streamed as events into the system. These events are processed by an AI model that predicts the likelihood of equipment failure. If the prediction indicates a high risk, an event is published to notify maintenance personnel or automatically schedule maintenance tasks, reducing downtime and optimizing maintenance schedules. This is particularly valuable in manufacturing, where EDA ensures timely interventions based on AI predictions.
  5. Personalised Marketing
    • Customer interactions, such as visiting certain pages or clicking on ads, generate events that build customer profiles. AI models analyze these profiles to determine the most effective marketing messages for each customer. When a customer meets specific criteria, such as not making a purchase in a while, an event triggers the sending of a personalized message, improving engagement and conversion rates. This use case demonstrates how EDA can enhance customer experiences through targeted communications.
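
As referenced in use case 3 above, a fraud-detection event handler might look roughly like the sketch below. The `score_transaction` rule set and the event names are illustrative assumptions standing in for a trained model and a real broker, not a specific production system.

```python
from dataclasses import dataclass


@dataclass
class TransactionEvent:
    transaction_id: str
    amount: float
    country: str


def score_transaction(event: TransactionEvent) -> float:
    """Stand-in for a trained fraud model; returns a risk score in [0, 1]."""
    risk = 0.1
    if event.amount > 10_000:
        risk += 0.5
    if event.country not in {"DE", "FR", "NL"}:
        risk += 0.3
    return min(risk, 1.0)


def publish(event_type: str, transaction_id: str) -> None:
    """Stand-in for an event-broker publish call."""
    print(f"event: {event_type} -> {transaction_id}")


def handle_transaction(event: TransactionEvent) -> None:
    """Consume a transaction event and publish the appropriate follow-up event."""
    score = score_transaction(event)
    if score >= 0.8:
        publish("transaction.blocked", event.transaction_id)
    elif score >= 0.5:
        publish("fraud.review.requested", event.transaction_id)
    else:
        publish("transaction.approved", event.transaction_id)


if __name__ == "__main__":
    handle_transaction(TransactionEvent("tx-001", 12_500.0, "US"))
```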

An interesting observation is how EDA supports personalised marketing, an unexpected application where customer behaviour events trigger tailored messages, boosting engagement in ways not immediately obvious from traditional AI use cases.

Implementation Considerations

When implementing EDA for AI applications, several key considerations ensure the system’s effectiveness and reliability:

  • Choosing the Right Event Broker: Select a robust event broker capable of handling the volume and variety of events, such as Apache Kafka, RabbitMQ, Amazon EventBridge, or Google Cloud Pub/Sub. The choice depends on factors like scalability, latency, and integration with existing systems.
  • Designing Events and Event Schemas: Define clear and consistent event schemas to ensure all components understand the structure and meaning of the events, including event type, payload, and metadata. This is crucial for maintaining interoperability and avoiding errors in event processing.
  • Handling Failures and Retries: Implement mechanisms to handle event-processing failures, such as retries with exponential backoff, dead-letter queues for unprocessed events, or alerting systems for manual intervention. This ensures system resilience, especially in high-volume AI applications (see the sketch after this list).
  • Monitoring and Debugging: Use monitoring tools to track event production, consumption, and processing times, identifying bottlenecks and ensuring system performance. Tools like Alibaba Cloud’s Application Real-Time Monitoring Service (ARMS) can be instrumental for long-term operations and maintenance.
  • Security and Compliance: Ensure the event-driven system adheres to security best practices, such as encryption of event data, access controls, and compliance with relevant regulations like GDPR or HIPAA, to protect sensitive AI data and maintain trust.
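
The failure-handling bullet above can be made concrete with a small retry-and-dead-letter helper. The broker and handler here are placeholders; a real deployment would normally rely on the broker’s native retry and dead-letter features where available.

```python
import time

DEAD_LETTER: list = []   # stand-in for a dead-letter queue


def process_with_retries(event: dict, handler, max_attempts: int = 4) -> bool:
    """Run the handler with exponential backoff; dead-letter on final failure."""
    delay = 0.5
    for attempt in range(1, max_attempts + 1):
        try:
            handler(event)
            return True
        except Exception as exc:             # broad catch for illustration only
            print(f"attempt {attempt} failed: {exc}")
            if attempt == max_attempts:
                DEAD_LETTER.append(event)     # park the event for inspection
                return False
            time.sleep(delay)
            delay *= 2                        # exponential backoff
    return False


def flaky_handler(event: dict) -> None:
    """Example handler that fails when the payload is missing."""
    if "payload" not in event:
        raise ValueError("missing payload")


if __name__ == "__main__":
    process_with_retries({"type": "inference.requested"}, flaky_handler)
    print("dead-lettered events:", DEAD_LETTER)
```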

Comparative Analysis: Challenges and Solutions

To further illustrate, consider the following table comparing challenges in AI applications and how EDA addresses them, based on insights from industry practices:

This table, derived from the Alibaba Cloud Community, highlights how EDA tackles specific AI challenges, reinforcing its suitability for these applications.

Conclusion and Future Outlook

EDA provides a flexible and scalable framework that is particularly well-suited for AI applications. By leveraging patterns such as asynchronous inference, real-time data updates, event-triggered actions, and component decoupling, organizations can build AI systems that are responsive, efficient, and adaptable to changing requirements. The use cases, from chatbots to predictive maintenance, demonstrate practical applications that enhance business outcomes and user experiences.

Looking forward, as AI continues to advance and integrate more deeply into various aspects of business and society, the importance of robust, event-driven architectures will only grow. Technical leaders, particularly CTOs, can position their organizations at the forefront of this evolution by adopting EDA, delivering innovative and high-impact AI solutions that meet the demands of a dynamic digital landscape.

How AI is Revolutionising Behavioural Biometrics For Authentication

Advanced AI is completely changing the world of behavioural biometrics, ushering in a new and much stronger era of secure authentication. By monitoring a user’s individual characteristic traits, such as typing tempo, mouse movement, and the way they press a touchscreen, AI-powered behavioural biometrics enables continuous, uninterrupted verification, improving the user experience while securing it more effectively.

The History of Behavioural Biometrics

From the very beginning, authentication systems that rely on passwords and static biometrics like fingerprints have been vulnerable to bypassing. Behavioural biometrics, on the other hand, employ the dynamic interactions and actions performed by users, which are nearly impossible to imitate, strengthening the security framework. AI plays a critical role here by examining massive amounts of user-interaction data in real time and detecting small deviations that conventional systems would miss.

AI’s Role in Enhancing Behavioural Biometrics

With AI, relevant traits can be extracted from user interactions, and behavioural data can be turned into measurable metrics. Users are then validated through pattern recognition, ranging from simple statistical baselines to more advanced methods such as neural networks and support vector machines. Continuous monitoring enables instant detection of anomalies, which aids in responding rapidly to security risks and unauthorised access. A simplified sketch of this kind of feature extraction and anomaly scoring follows.
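
To illustrate the general idea (not any particular vendor’s system), the sketch below derives simple typing-rhythm features from keystroke timestamps and flags a session that deviates strongly from a user’s baseline. The feature choice and the z-score threshold are illustrative assumptions.

```python
from statistics import mean, pstdev


def typing_features(key_times: list) -> dict:
    """Convert keystroke timestamps (seconds) into simple rhythm features."""
    gaps = [b - a for a, b in zip(key_times, key_times[1:])]
    return {"mean_gap": mean(gaps), "gap_std": pstdev(gaps)}


def is_anomalous(session: dict, baseline: dict, tolerance: float = 3.0) -> bool:
    """Flag a session whose mean inter-key gap is far from the user's baseline."""
    if baseline["gap_std"] == 0:
        return session["mean_gap"] != baseline["mean_gap"]
    z = abs(session["mean_gap"] - baseline["mean_gap"]) / baseline["gap_std"]
    return z > tolerance


if __name__ == "__main__":
    # Baseline session: the user's usual, fairly even typing rhythm.
    baseline = typing_features([0.00, 0.18, 0.35, 0.55, 0.71, 0.90])
    # New session: much slower, more erratic keystrokes.
    session = typing_features([0.00, 0.60, 1.40, 2.10, 2.95, 3.70])
    print("anomalous:", is_anomalous(session, baseline))
```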

Real-World Applications and Metrics

  1. Fraud Detection in Financial Services: A leading European insurer implemented behavioural biometrics to analyse how claimants interacted with online forms, detecting unnatural typing patterns and navigation behaviours indicative of fraud. This led to a 40% reduction in fraudulent claims within six months.
  2. Enhanced Customer Experience: An American health insurance company used behavioural biometrics for customer authentication, recognising users based on their interaction patterns with the company’s app. This approach reduced average customer service call times by 30%, significantly improving customer satisfaction.
  3. Risk Assessment Accuracy: A life insurance provider in Asia incorporated behavioural biometrics to refine risk assessment models by analysing lifestyle patterns affecting health and longevity. This led to more accurate premium calculations and personalised insurance packages.

Privacy and Ethical Considerations

The application of AI in behavioural biometrics comes with notable ethical and data privacy issues. Even though these systems increase security, they need to be applied with care and responsibility, given the nature of the data. Security, user privacy, and inclusivity need to be balanced very carefully. Approaches like federated learning and edge computing provide the means for AI models to be trained on the user’s device, which greatly minimises the danger of breaches and strengthens compliance with privacy laws such as the GDPR.

Challenges and Future Outlook

Though promising, behavioural biometrics struggle with privacy and accuracy, as well as general user acceptance. Businesses in the field need to fortify protections and gain consent from users to avoid oversharing sensitive information. Usability and security have to be balanced because excessive false acceptances or rejections can undermine user trust in the system. Building trust requires addressing cultural differences along with the need for openness. Incorporating ethics focused on privacy, consent, and robust security makes the system more reliable.

As the technology advances, the integration of AI with behavioural biometrics will enhance authentication systems across numerous industries, offering users an ideal balance of security and convenience.

RISC-V Chip Adoption Driven by a Strategic Chinese Policy Set to Launch in 2025

In a landmark move poised to reshape its technological landscape, China is gearing up to launch its inaugural national policy championing the adoption of RISC-V chips. This strategic initiative, slated for release as early as March 2025, marks a significant step in the country’s quest to pivot away from Western-dominated semiconductor technologies and bolster its homegrown innovation amid escalating global tensions.

Insiders familiar with the development reveal that the policy has been meticulously crafted through a collaborative effort involving eight key government entities. Among them are heavyweights like the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Science and Technology, and the China National Intellectual Property Administration. Together, these bodies aim to cement RISC-V’s role as a cornerstone of China’s burgeoning tech ecosystem, fostering an environment ripe for domestic chip development and deployment.

The mere whisper of this policy has already sent ripples through the financial markets, igniting a wave of optimism among investors. On the day of the leak, Chinese semiconductor stocks staged an impressive rally. The CSI All-Share Semiconductor Products and Equipment Index, which had been languishing earlier, reversed course to surge by as much as 2.5%. Standout performers included VeriSilicon, which hit its daily trading cap with a 10% spike, alongside ASR Microelectronics, Shanghai Anlogic Infotech, and 3Peak, whose shares soared between 8.6% and an eye-catching 15.4% in afternoon trading.

At the heart of this policy push lies RISC-V, an open-source chip architecture that’s steadily carving out a global niche as a versatile, cost-effective rival to proprietary giants like Intel’s x86 and Arm Holdings’ microprocessor designs. Unlike its high-powered counterparts, RISC-V is often deployed in less demanding applications—think smartphones, IoT devices, and even AI servers—making it a pragmatic choice for a wide swath of industries. In China, its allure is twofold: slashed development costs and, critically, its freedom from reliance on U.S.-based firms, a factor that’s taken on heightened urgency amid trade restrictions and geopolitical friction.

Until now, RISC-V’s rise in China has been organic, driven by market forces rather than official mandates. This forthcoming policy changes the game, thrusting the architecture into the spotlight as a linchpin of Beijing’s broader campaign to achieve technological self-sufficiency. The timing is no coincidence—U.S.-China relations remain strained, with American policymakers sounding alarms over China’s growing leverage in the RISC-V space. Some U.S. lawmakers have even pushed to curb American companies’ contributions to the open-source platform, fearing it could turbocharge China’s semiconductor ambitions.

China’s RISC-V ecosystem is already buzzing with activity, spearheaded by homegrown innovators like Alibaba’s XuanTie division and rising star Nuclei System Technology, both of which have rolled out commercially viable RISC-V processors. The architecture’s flexibility is proving especially attractive in the AI sector, where models like DeepSeek thrive on efficient, lower-end chips. For smaller firms chasing affordable AI solutions, RISC-V offers a tantalizing blend of performance and price—a trend that could gain serious momentum under the new policy.

Sun Haitao, a manager at China Mobile System Integration, underscored the pragmatic appeal of RISC-V in a recent statement. “Even if these chips deliver just 30% of the performance of top-tier processors from NVIDIA or Huawei,” he noted, “their cost-effectiveness becomes undeniable when you scale them across multiple units.” This scalability could prove transformative for industries looking to maximize output without breaking the bank.

As China prepares to roll out this groundbreaking policy, the global tech community is watching closely. For Beijing, it’s a calculated gambit to secure its place at the forefront of the semiconductor race—one that could redefine the balance of power in a world increasingly divided by technology.

Opinion: AI Will Never Gain Consciousness

Artificial intelligence will never become a conscious being because it lacks the aspirations inherent in humans and other biological species. This statement was made by Sandeep Nailwal, co-founder of Polygon and the AI company Sentient, in a conversation with Cointelegraph.

The expert does not believe that the end of the world is possible due to artificial intelligence gaining consciousness and seizing power over humanity.

Nailwal was critical of the theory that consciousness arises accidentally as a result of complex chemical interactions or processes. Although such processes can lead to the emergence of complex cells, they do not give rise to consciousness, the entrepreneur noted.

The co-founder of Polygon also expressed concerns about the risks of surveillance of people and the restriction of freedoms by centralized institutions with the help of artificial intelligence. Therefore, AI should be transparent and democratic, he believes.

“[…] Ultimately, global AI, which can create a world without borders, must be controlled by every person,” Nailwal emphasized.

He added that everyone should have a personal artificial intelligence that is loyal and protects against the neural networks of influential corporations.

Recall that in January, Simon Kim, CEO of the crypto venture fund Hashed, expressed confidence that the future of artificial intelligence depends on a radical shift: opening the “black box” of centralized models and creating a decentralized, transparent ecosystem on the blockchain.

Agentic AI: Pioneering Autonomy and Transforming Business Landscapes

The new autonomous systems labelled AI agents represent the latest evolution of AI technology and mark a new era in business. In contrast to traditional AI models, which simply follow the commands given to them and emit outputs in a fixed format, AI agents operate with a degree of freedom. According to Google, these agents are capable of functioning on their own without constant human supervision. The World Economic Forum describes them as systems with sensors to perceive their environment and effectors to act on it. AI agents are expected to transform industries as they evolve from rigid, rule-based frameworks into sophisticated models adept at intricate decision-making. With unprecedented autonomy comes equally unprecedented responsibility: the benefits agentic AI brings are accompanied by unique challenges that invite careful consideration, planning, governance, and foresight.

The Mechanics of AI Agents: A Deeper Dive

Traditional AI tools, such as Generative AI (GenAI) or predictive analytics platforms, rely on predefined instructions or prompts to deliver results. In contrast, AI agents exhibit dynamic adaptability, responding to real-time data and executing multifaceted tasks with minimal oversight. Their functionality hinges on a trio of essential components:

  • Foundational AI Model: At the heart of an AI agent lies a powerful large language model (LLM), such as GPT-4, LLama, or Gemini, which provides the computational intelligence needed for understanding and generating responses.
  • Orchestration Layer: This layer serves as the agent’s “brain,” managing reasoning, planning, and task execution. It employs advanced frameworks like ReAct (Reasoning and Acting) or Chain-of-Thought prompting, enabling the agent to decompose complex problems into logical steps, evaluate outcomes, and adjust strategies dynamically, mimicking human problem-solving processes (a minimal sketch of such a loop appears below).
  • External Interaction Tools: These tools empower agents to engage with the outside world, bridging the gap between digital intelligence and practical application. They include:
    • Extensions: Enable direct interaction with APIs and services, allowing agents to retrieve live data (e.g., weather updates or stock prices) or perform actions like sending emails.
    • Functions: Offer a structured mechanism for agents to propose actions executed on the client side, giving developers fine-tuned control over outputs.
    • Data Stores: Provide access to current, external information beyond the agent’s initial training dataset, enhancing decision-making accuracy.

This architecture transforms AI agents into versatile systems capable of navigating real-world complexities with remarkable autonomy.
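
To illustrate how the components above fit together, here is a deliberately simplified, ReAct-style reason-act-observe loop. The `llm_decide` function stands in for a real LLM call, and the tool registry is a toy example, not any specific framework’s API.

```python
from typing import Callable, Dict, List, Tuple

# Toy tool registry: the "external interaction tools" of the agent.
TOOLS: Dict[str, Callable[[str], str]] = {
    "weather": lambda city: f"Sunny, 21°C in {city}",   # stand-in for an API call
    "finish": lambda answer: answer,                    # terminates the loop
}


def llm_decide(goal: str, observations: List[str]) -> Tuple[str, str]:
    """Placeholder for the foundational model: pick the next tool and its input."""
    if not observations:
        return "weather", goal                       # reason: need data first
    return "finish", f"Weather in {goal}: {observations[-1]}"


def run_agent(goal: str, max_steps: int = 5) -> str:
    """Reason -> act -> observe loop, in the spirit of ReAct."""
    observations: List[str] = []
    for _ in range(max_steps):
        tool, tool_input = llm_decide(goal, observations)   # reasoning step
        result = TOOLS[tool](tool_input)                    # acting step
        if tool == "finish":
            return result
        observations.append(result)                         # observation step
    return "stopped: step budget exhausted"


if __name__ == "__main__":
    print(run_agent("Berlin"))
```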

Multi-Agent Systems: The Newest Frontier

The Multi-Agent System (MAS) market is in for tremendous growth: according to McKinsey, it is predicted to grow at a staggering rate of nearly 28% by 2030. Bloomberg recently predicted that AI breakthroughs will soon give rise to multi-agent systems, collaborative networks of AI agents working together towards ambitious objectives. These systems promise scalability that surpasses the benefits of single operating agents.

Picture a smart city in which multiple AI agents work alongside each other: one controlling traffic signals, another managing traffic-direction units, and a third rerouting emergency responders. And all of this happens in real time.

Governance is key here to prevent systemic failure, restricting conflicting commands that could cause paralysis or dysfunction. For multi-agent systems to deliver on their promise, agents need a degree of standardized freedom, but the benefits only materialize when clear protocols and constraints are prescribed and adhered to.

Opportunities and Challenges

The potential of AI agents is game-changing, but their independence creates serious concerns. The World Economic Forum highlights several challenges that companies must deal with:

  • Risks Associated with Autonomy: Ensuring safety and reliability becomes all the more difficult as agents become more independent. For example, an unmonitored agent could execute a resource-allocation decision that triggers cascading operational failures.
  • Lack of Accountability: Trust is already fragile because of opaque, “black box” reasoning, and it becomes even more critical in high-risk domains such as healthcare or finance. Ensuring transparency and accountability is non-negotiable.
  • Risks Surrounding Privacy and Security: Agents can only function effectively with access to a multitude of sensitive systems and datasets, which raises the question: how do we grant sufficient permissions without compromising security? Strong policies are needed to enforce standards that protect sensitive data and privacy while preventing breaches.

Guarding against these risks requires proactive measures such as continuous monitoring, adherence to ethical AI principles, and human-in-the-loop oversight of critical AI decisions. Organizations also need auditing tools that monitor agent behaviour and correct an agent’s course when it deviates, so they can regain control and stay aligned with organizational goals.

The Human-AI Partnership

Even though AI agents operate independently, their purpose is not to replace human reasoning but to augment it. The EU AI Act is a reminder that human intervention remains necessary in sensitive processes such as security and legal compliance. The best arrangement is one in which humans and machines work together: agents perform the monotonous, repetitive work of processing large amounts of data, which frees humans to be more strategic, creative, and ethical.

In a logistics company, for instance, an AI agent might autonomously optimize delivery routes using traffic information, while a manager applies judgment and approves the AI’s plan based on customer preferences or other unforeseen factors. This preserves human control and supervision while still improving efficiency. A simplified sketch of such an approval gate follows.
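
A human-in-the-loop arrangement like the logistics example above can be sketched as a simple approval gate. The route optimizer and the approval check are hypothetical placeholders for an agent’s plan and a manager’s review.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class RoutePlan:
    stops: List[str]
    estimated_hours: float


def propose_route(orders: List[str]) -> RoutePlan:
    """Stand-in for the agent's autonomous route optimization."""
    return RoutePlan(stops=sorted(orders), estimated_hours=6.5)


def manager_approves(plan: RoutePlan) -> bool:
    """Human-in-the-loop check; here a trivial rule instead of a real review."""
    return plan.estimated_hours <= 8.0


def dispatch(orders: List[str]) -> str:
    """Only act on the agent's plan once the approval gate has passed."""
    plan = propose_route(orders)
    if manager_approves(plan):
        return "dispatched: " + " -> ".join(plan.stops)
    return "held for manual re-planning"


if __name__ == "__main__":
    print(dispatch(["Depot B", "Customer 7", "Customer 3"]))
```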

Guidelines for Implementing Agentic AI Strategically

Google’s analysis and the World Economic Forum converge on a central idea: the responsible use of AI agents can result in outstanding value creation and unparalleled innovation. To reap that value with manageable risk, businesses need to adopt the following practices:

  • Develop Skills: Prepare the workforce to build, implement, and administer AI agents, ensuring the effective application of the technology.
  • AI Ethics: Develop business governance frameworks that adhere to international benchmarks, such as the EU AI Act, requiring fair and accountable operation of agents.
  • Set Boundaries: Delegated agent discretion must come with safeguards and explicit controls that prevent overreach or decisions outside the agent’s remit.
  • Validate Continuously: Keep agent behaviour aligned with organizational needs through active auditing, stress testing, and ongoing refinement of value objectives.


Final thoughts

By integrating reasoning and planning, agentic AI gains the ability to act on its own, and AI agents mark a pivotal leap in the evolution of artificial intelligence. Their potential to change industries, from personalized healthcare to smart cities, is phenomenal, but deploying them carelessly would be a grave mistake. For AI agents to be dependable companions, trust and security must anchor their development.

Organizations that strike the right balance, enabling agents to innovate while maintaining human supervision, will be the ones leading the charge in this technological revolution. Agentic AI is not just another business tool; it is a paradigm shift that reimagines autonomy. Such a future is bound to belong to those who embrace its potential with clarity and caution.

Grok Names Elon Musk as the Main Disinformer

Elon Musk is the main disseminator of disinformation on X, according to Grok, the AI assistant from the entrepreneur’s startup xAI that is integrated into his social network.

The billionaire has a huge audience and often spreads false information on various topics, the chatbot claims. Other disinformers named by the model include Donald Trump, Robert F. Kennedy Jr., Alex Jones, and RT (Russian state television).

Trump shares false claims about elections, Kennedy Jr. spreads falsehoods about vaccines, and Alex Jones is known for conspiracy theories, while Russian television lies about political issues, Grok added.

Grok’s Top Disseminators of Disinformation. Data: X.

The chatbot cited Rolling Stone, The Guardian, NPR, and NewsGuard as sources of information.

“The selection process involved analyzing multiple sources, including academic research, fact-checking organizations, and media reports, to identify those with significant influence and a history of spreading false or misleading information,” the AI noted.

The criteria for compiling the rankings included the volume of false information spread, the number of followers, and mentions in credible reports.

When asked for clarification, Grok noted that the findings may be biased because the sources provided are mostly related to the funding or opinions of Democrats and liberals.

Recall that in January, artificial intelligence was used to spread fake news about the fires in Southern California.

A similar situation arose after Hurricane Helene.

Google Unveils Memory Feature for Gemini AI Chatbot

Google has launched a notable update to its Gemini AI chatbot, equipping it with the ability to remember details from previous conversations, a development experts are calling a major advancement.

In a blog post released on Thursday, Google detailed how this new capability allows Gemini to store information from earlier chats, provide summaries of past discussions, and craft responses tailored to what it has learned over time.

This upgrade eliminates the need for users to restate information they’ve already provided or sift through old messages to retrieve details. By drawing on prior interactions, Gemini can now deliver answers that are more relevant, cohesive, and enriched with additional context pulled from its memory. This results in smoother, more personalized exchanges that feel less fragmented and more like a continuous dialogue.

Rollout Plans and Broader Access
The memory feature is first being introduced to English-speaking users subscribed to Google One AI Premium, a $20 monthly plan offering enhanced AI tools. Google plans to extend this functionality to more languages in the near future and will soon bring it to business users via Google Workspace Business and Enterprise plans.

Tackling Privacy and User Control
While the ability to recall conversations offers convenience, it may raise eyebrows among those concerned about data privacy. To address this, Google has built in several options for users to oversee their chat data. Through the “My Activity” section in Gemini, individuals can view their stored conversations, remove specific entries, or decide how long data is kept. For those who prefer not to use the feature at all, it can be fully turned off, giving users complete authority over what the AI retains.

Google has also made it clear that it won’t use these stored chats to refine its AI models, putting to rest worries about data being repurposed.

The Race to Enhance AI Memory

Google isn’t alone in its efforts to boost chatbot memory. OpenAI’s Sam Altman has highlighted that better recall is a top demand from ChatGPT users. Over the last year, both companies have rolled out features letting their AIs remember things like a user’s favorite travel options, food preferences, or even their preferred tone of address. Until now, though, these memory tools have been fairly limited and didn’t automatically preserve entire conversation histories.

Gemini’s new recall ability marks a leap toward more fluid and insightful AI exchanges. By keeping track of past talks, it lets users pick up where they left off without losing the thread, proving especially handy for long-term tasks or recurring questions.

As this feature spreads to more users, Google underscores its commitment to transparency and control, ensuring people can easily manage, erase, or opt out of data retention altogether.

Sam Altman Talks About the Features of GPT-4.5 and GPT-5

OpenAI CEO Sam Altman shared the startup’s plans to release GPT-4.5 and GPT-5 models. The company aims to simplify its product offerings by making them more intuitive for users.

Altman acknowledged that the current product line has become too complex, and OpenAI is looking to change that.

“We hate model selection as much as you do and want to get back to magical unified intelligence,” he wrote.

GPT-4.5, codenamed Orion, will be the startup’s last AI model without a “chain of reasoning” mechanism. The next step is to move toward more integrated solutions.

The company plans to combine the o and GPT series models, creating systems capable of:

  • using all available tools;
  • independently determining when deep thinking is needed and when an instant solution is enough;
  • adapting to a wide range of tasks.

GPT-5 will integrate various technologies, including o3. Other innovations will include Canvas mode, search, Deep Research, and much more.

Free-tier users will get unlimited access to GPT-5 at the standard intelligence setting. Plus and Pro account holders will be able to use advanced features at a higher level of intelligence.

Regarding the release dates of GPT-4.5 and GPT-5, Altman wrote in the comments under the tweet about “weeks” and “months”, respectively.

According to Elon Musk, Grok 3, a competitor to ChatGPT, is in the final stages of development and will be released in one to two weeks, Reuters reports.

“Grok 3 has very powerful reasoning capabilities, so in the tests we’ve done so far, Grok 3 outperforms all the models that we know of, so that’s a good sign,” the entrepreneur said during a speech at the World Governments Summit in Dubai.

Recall that Altman turned down a $97.4 billion bid from Musk and a group of investors to buy the non-profit that controls OpenAI. The startup’s CEO said the offer was an attempt to “slow down” a competing project.