Microsoft: AI Now Constitutes 30% of Company Code, Estimated to Reach 95% by 2030

The coding landscape at Microsoft is changing rapidly as the company's use of artificial intelligence evolves. According to CEO Satya Nadella, AI now generates about 20 to 30 percent of the code in company repositories, and that figure could jump to 95% by 2030. The trend is most pronounced for Python, the dominant language of AI development.

During the ‘LlamaCon’ conference, in a dialogue with Mark Zuckerberg, Nadella also remarked on AI’s increasing prominence in automating software engineering tasks. He pointed out that Python retains the lead in AI-generated code, while languages such as C++ lag far behind due to the complexity of adoption.

Microsoft’s Chief Technology Officer Kevin Scott shares this view, predicting a long-term shift where AI will substantially dominate code writing, calling this an inevitable change in development workflows.

A Broader Industry Trend  

Microsoft isn’t the only one experiencing this change. Just last week, Google CEO Sundar Pichai said that over 30 percent of Google’s code is also AI generated. Neither company, however, has explained how those numbers are calculated, which leaves them open to interpretation.

The concern with not measuring AI contributions precisely is that AI code generation is not uniform. The figures depend heavily on how companies count contributions: by lines committed, code accepted, pull requests merged, or some other metric.

The Main Takeaway

Although the precise figures are debatable, one thing is clear: AI is becoming ever more integrated into software engineering at leading tech companies. If current trends continue, we may be heading towards a future where human developers focus on problem-solving and design while AI does most of the coding.


Scientists Secretly Launched AI Bots on Reddit: They Posed as Victims of Violence and Opponents of BLM, Manipulating People Without Their Knowledge

The University of Zurich conducted a secret experiment on Reddit: AI bots posted emotional comments from people who didn’t exist – including a rape victim, a black opponent of the Black Lives Matter movement, and even someone who blamed religious groups for mass murder. All without the users’ knowledge.

Research without consent

The AI bots operated in the popular Change My View subreddit, where participants openly ask for their views to be refuted.

The researchers did not warn moderators or users that the responses were being written by neural networks. Moreover, the comments collected data on users’ gender, age, location, and political views – without their consent.

The university acknowledged that it had violated community guidelines, but considered the experiment “justified in light of its social significance.” Following a complaint from CMV moderators, the ethics committee limited itself to a verbal warning to the lead researcher and allowed the publication of a scientific article based on the experiment’s results.

Manipulations on behalf of a “psychologist” and a “patient”

The AI pretended to be:

  • a rape victim;
  • a psychologist working with trauma;
  • a black user speaking out against BLM;
  • a person who experienced poor treatment abroad;
  • a witness to religious crimes.

The goal was to test how convincing neural networks can be in disputes. But the moderators emphasize that the experiment went beyond the bounds of ethics and turned into a form of manipulation.

People entered into a discussion with fake characters, not knowing that their interlocutor was a machine collecting data.

No Consequences

The moderators filed a formal complaint, but the university only promised to increase oversight of future research. The article, however, will ultimately not be published.

The administration believes that the “potential trauma is minimal” and the “value of the knowledge gained” is too high to be ignored.

What’s Next

Reddit users are outraged. Research conducted without consent and under the guise of sincere communication undermines trust in the platform.

New Age Technologies and Their Legal Rights: Analysing Autonomous AI Agents from a Legal Perspective

Authored by Ludovico Besana, Senior Test Engineer

Autonomous AI agents are still an emerging concept, but they are sure to become popular in Web3. Such bots have already started participating in DeFi and trading, proving that entire M2M networks and ecosystems can be powered completely by AI. However, the operation of autonomous AIs raises alarming concerns for existing legal frameworks.

In this article, I will analyze the “life” and “death” cycle of an AI agent from a legal standpoint, with particular attention to the criteria for recognising the identity of a digital cyborg, and propose the simplest approaches to defining the law applicable to these beings.

Fundamental questions   

The idea of autonomous AI agents operating on blockchain technology is no longer a mere fantasy. One well-known example is Terminal of Truth. This agent, based on the Claude Opus model, persuaded Marc Andreessen (a16z) to invest $50,000 in the launch of the Goatseus Maximus (GOAT) token, which the bot “religiously” promoted. GOAT now trades at a market cap above $370 million.

AI agents fitting seamlessly within the Web3 ecosystem is unsurprising. They may be restricted from opening bank accounts, but they can manage crypto wallets and X accounts. Currently, AI agents are primarily concerned with meme tokens, but the potential applications in decentralised governance, machine networks, oracles, and trading are enormous.  

The greater the efforts to make AI agents mimic human actions, the more challenges there will be from a legal standpoint. Every legal system needs to provide an answer to these questions: What legal status should AI agents have? Which entity, if any, holds the rights and the liabilities for their actions? In what manner can AI agents be structured and shielded from legal risks?

Fundamental Legal Issues with AI Agents

Lack of Legal Personality

Legal systems recognize only two types of entities: natural persons (people) and legal persons (companies), and autonomous AI agents do not fit into either category. Although they can imitate human behavior (e.g. through social media accounts), they do not have a body, moral consciousness, or legal identity.

Some theorists propose granting AI agents “electronic legal personality” — a status similar to that of corporations, but adapted for artificial intelligence. In 2017, the European Parliament even considered this issue, but the idea was rejected due to various concerns and risks that have not yet been addressed.

It is likely that autonomous AI agents will not receive the status of legal entities in the near future. However, as was the case with DAOs, some crypto-friendly jurisdictions will attempt to create special legal regimes and corporate forms tailored to AI agents.

Responsibility for actions and their consequences

Without legal personality, AI agents cannot enter into transactions, own property, or bear responsibility. For the legal system, they simply do not exist as subjects. However, they already interact with the outside world and perform legally significant actions that lead to legal consequences.

A logical question arises: who is the real party to the transaction, who acquires rights, and who is responsible for the consequences? From a legal perspective, an AI agent is currently a tool through which its owner or operator acts. Therefore, any actions of an AI agent are de jure actions of its owner, an individual or legal entity.

Thus, since an AI agent itself cannot acquire rights and responsibility, for its legal existence it needs a subject that is recognised by the legal system and is able to acquire rights and obligations in its place.

Regulatory Restrictions

The emergence of the first successful large language model (LLM) — ChatGPT — generated unprecedented interest in AI and machine learning. It was only a matter of time before regulation followed. In 2024, the European Union adopted the AI Act, which remains the most comprehensive regulation in the field of artificial intelligence to date. In other countries, limited AI regulation has either already been adopted, is being introduced, or is planned.

The European Artificial Intelligence Act differentiates AI systems by their level of risk. For systems with zero or minimal risk, there is little or no regulation. In the case of a higher risk, AI is subject to restrictions and obligations, such as disclosing its nature.

AI agents that interact with third parties, for example by publishing posts or making on-chain transactions, may also fall under traditional regulation in areas such as consumer protection and personal data. In such cases, the activities of autonomous bots can be treated, for example, as the provision of services. The lack of clear geographic boundaries and the global reach of agents’ activities further complicate compliance.

Ethics

Since AI agents have limited capabilities and scope so far, their creators rarely think about ethics. Priority is given to autonomous (trustless) execution and speed, rather than deep ethical configuration.

However, having an “ethical compass” when making autonomous decisions in high-risk areas such as finance, trade, and management is at least desirable. Otherwise, erroneous data in the training set or trivial errors in configuration can lead to the agent’s actions causing harm to people. The higher the autonomy and discretion of the AI agent, the higher the risks.

Legal Structuring of AI Agents

Workable legal models for AI agents are of great importance for innovation, the development of the field as a whole, and the emergence of more advanced bots. While cryptocurrencies can already be called a regulated industry, in the case of AI agents, legal structuring is complicated by the fact that the industry is not standardized, so it requires a creative approach.

Approach to Structuring

In my opinion, one of the main goals of legal structuring of an autonomous AI agent should be to acquire its own legal personality and legal identity, independent of its creator. In this regard, the question arises: at what point can we consider that an AI agent really has these characteristics?

Every developer strives to ensure that their agent is as close as possible to a real person acting independently. It is logical that they would like to provide agents with freedom from a legal point of view. To achieve this, in my opinion, two key conditions must be met. First, the AI agent must be independent not only in making its own decisions, but also in the ability to implement them in a legal sense – to carry out its will and make final decisions regarding itself. Second, it must have the ability to independently acquire rights and obligations as a result of its actions, independently of its creator.

Since the AI agent cannot be recognized as an individual, the only way for it to achieve legal personality at the moment is to use the status of a legal entity. The agent will achieve legal personality when it can, as a full-fledged person, make independent decisions and implement them on its own behalf.

If successful, this order of things will bring the AI agent to life from a legal point of view. Such a digital person, having received legal existence, can well be compared to a digital cyborg. A cyborg (short for “cybernetic organism”) is a creature that combines mechanical-electronic and organic elements. In a digital cyborg, the mechanical part is replaced by a digital one, and the organic part is replaced by people who participate in the implementation of its decisions.

Our digital cyborg will consist of three key components:

  • AI agent – electronic brain;
  • corporate form – legal body;
  • people involved in performing tasks – organic hands.

The Challenges of Corporate Form

Traditional legal entity forms, such as LLCs and corporations, require that both the ultimate ownership and ultimate control reside in humans. Corporate structures are not designed for ephemeral digital identities, which brings us to the central challenge of legally structuring blockchain AI agents: the challenges of corporate form.

If we want to give an AI agent a legal identity through a corporate form and ensure its independence and autonomy within that structure, we need to be able to eliminate human control over such an entity. Otherwise, if ultimate control resides with humans, the AI becomes a tool rather than a digital person. We also need to ensure that in cases where a human is required to implement an AI decision, such as signing a contract or performing administrative tasks, that human cannot block or veto the AI agent’s decision (barring a “machine uprising”).

But how can this be done when traditional corporate forms require that people own and manage agents? Let’s find out.

Three key aspects of the framework

1. Blockchain environment

AI agents are capable of independently performing on-chain transactions, including interaction with multisig wallets and smart contracts. This allows the AI agent to be assigned a unique identifier – a wallet through which it gives reliable instructions and commands to the blockchain. Without this, the existence of a real digital cyborg is not yet possible.
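
As a rough illustration of that idea, the Python sketch below (assuming the eth-account library) generates a wallet that can serve as an agent's unique on-chain identifier and signs an instruction with it. The instruction text is invented for the example, and a real deployment would generate the key inside a TEE rather than in application code.

```python
# A minimal sketch, assuming the eth-account library (pip install eth-account).
# In a real deployment the key would be generated inside a TEE so that no
# human ever sees it; here it is generated in-process for illustration only.
from eth_account import Account
from eth_account.messages import encode_defunct

# Create a fresh wallet that serves as the agent's unique on-chain identifier.
agent_wallet = Account.create()
print(f"Agent address: {agent_wallet.address}")

# The agent "speaks" to the blockchain by signing instructions with its key.
instruction = encode_defunct(text="approve proposal #42")  # hypothetical command
signed = Account.sign_message(instruction, private_key=agent_wallet.key)
print(f"Signature: {signed.signature.hex()}")
```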

2. Autonomy and freedom of action

To maintain the full autonomy of the digital cyborg, it is important that people involved in the management of the legal structure cannot interfere with the actions of the AI agent or influence its decisions. This ensures that the artificial intelligence retains freedom of action and is able to implement its own will, and requires the adoption of both legal and technical measures.

For example, in order for the AI agent to truly own and control the blockchain wallet, the wallet can be created in a trusted execution environment (TEE). This ensures that no human has access to the wallet, its seed phrase, or its assets. From a legal perspective, the corporate documents of the legal entity used as a wrapper for the AI must provide for the correct distribution of control and authority, as well as security mechanisms that exclude human intervention and can be changed only in a limited number of cases.

3. Human Enforcers

Since we still live in a legal world, some decisions will require the AI agent to involve human enforcers. This means that the AI will instruct officials on what actions to take. This view of things changes the traditional hierarchy, since in our scenario, the AI essentially gains control over humans, at least within its own corporate structure.

This aspect is perhaps the most interesting, since it requires an unconventional approach. One could even say that this state of affairs violates Isaac Asimov’s Second Law of Robotics, but I doubt anyone really cares about that right now. Besides, adequate emergency mechanisms and a proper “ethical compass” solve this problem, at least at this stage.

AI wrappers — legal structures for agents working on the blockchain

As we have already found out, traditional corporate structures are not suitable for our purposes and do not allow us to achieve the desired result. Therefore, below we will consider the structures that were developed for DAO and blockchain communities — these are both classic structures adapted for Web3 and specialized corporate forms for decentralized autonomous organizations.

From the point of view of the creator of the AI ​​agent, legal structuring allows separating the agent from the creator, obtaining limited liability through a corporate structure, and also provides the opportunity to plan and optimize taxes and financial risks.

Foundations and trusts

A purpose trust and an ownerless foundation have many common characteristics, but differ in nature. A foundation is a full-fledged legal entity, while a trust is more of a contractual entity that often does not require state registration. We will consider these forms in the context of the most popular Web3 jurisdictions: foundations in the Cayman Islands and Panama, and trusts in Guernsey. The key advantages are the absence of taxes, high flexibility in procedures and management, and the ability to integrate blockchain into the decision-making process.

Both foundations and trusts require management in the form of individuals or legal entities. At the same time, they allow for the integration of smart contracts and other technical solutions into management. For example, management can be required to request approval from an AI agent through interaction with it, a smart contract, or a wallet controlled by AI. A more complex legal design will allow the agent to give instructions to management, including through “thoughts” generated by the AI. Thus, the use of trusts and foundations allows for the creation of more complex corporate structures adapted to AI agents and supporting their autonomy.
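
As a loose companion to the wallet sketch earlier, the hedged example below shows how such an approval gate might look in code: a corporate action executes only if it carries a signature that recovers to the agent's wallet address. The eth-account calls are real; the action name and overall flow are invented for illustration.

```python
# Hedged sketch: a corporate action goes through only when it carries a
# valid approval signed by the agent's wallet. Assumes the eth-account
# library; the action and the flow are invented for the example.
from eth_account import Account
from eth_account.messages import encode_defunct

agent = Account.create()           # the agent's wallet (see earlier sketch)
AGENT_ADDRESS = agent.address      # fixed in the foundation's documents

def approve(action: str) -> bytes:
    # The agent signs the action it wants management to carry out.
    return Account.sign_message(encode_defunct(text=action),
                                private_key=agent.key).signature

def execute_corporate_action(action: str, approval: bytes) -> None:
    # Management's side: verify the approval really came from the agent.
    signer = Account.recover_message(encode_defunct(text=action),
                                     signature=approval)
    if signer != AGENT_ADDRESS:
        raise PermissionError("action not approved by the AI agent")
    print(f"executed: {action}")

execute_corporate_action("renew hosting contract",
                         approve("renew hosting contract"))
```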

If necessary, the creator of an AI agent can act as a limited-power beneficiary, which will allow him to obtain financial rights and manage taxes without interfering with the agent’s activities and decisions.

Algorithmically-managed DAO LLCs

A DAO LLC is a special corporate form designed for decentralized organizations. However, it is possible to create a DAO LLC with only one participant, i.e. without a real organization. Below, we will consider this form in two of the most popular jurisdictions: Wyoming (USA) and the Marshall Islands.

We are talking specifically about algorithmically-managed DAO LLCs, since in such a company, all power can be concentrated in smart contracts rather than in human hands. This is an extremely important aspect, since in our case the smart contracts can be controlled by an AI agent, which allows all power in this corporate form to be transferred to the artificial intelligence.

DAO LLCs also have flexibility in terms of procedures and corporate governance, so they can implement complex control and decision-making mechanisms, as well as reduce the level of human intervention in these processes.

Although the presence of a natural or legal person is still formally required, their powers may be significantly limited, for example to the execution of technical tasks, corporate actions, and the implementation of decisions made at the smart contract level. In this context, the role of a member (participant) of a DAO LLC may be performed by the creator of the AI agent, which will allow him to obtain financial rights and, in the future, the authority to distribute the profits received.

Simpler AI agents

Classical corporate structures can also be used to structure simpler AI agents, such as trading bots, since in this case there is no need to subordinate the corporate form to the decisions and discretion of the AI ​​agent. In this case, artificial intelligence continues to be a means or tool of its creator and does not claim the status of a full-fledged digital cyborg.

In conclusion

Autonomous AI agents can change the blockchain industry and significantly accelerate innovation in almost all areas. They are still at the very beginning of the journey, but the pace of development is colossal, and very soon we may see real digital cyborgs – digital organisms with a stable thought process and their own identity. This, however, will require a combination of technical and legal innovation.

A $41,200 humanoid robot was unveiled in China

The Chinese company UBTech Robotics presented a humanoid robot for 299,000 yuan ($41,200). This is reported by SCMP.

Tien Kung Xingzhe was developed in collaboration with the Beijing Humanoid Robot Innovation Center. It is available for pre-order, with deliveries expected in the second quarter.

The robot is 1.7 meters tall and can move at speeds of up to 10 km/h. Tien Kung Xingzhe easily adapts to a variety of surfaces, from slopes and stairs to sand and snow, maintaining smooth movements and ensuring stability in the event of collisions and external interference.

The robot is designed for research tasks that require increased strength and stability. It is powered by the new Huisi Kaiwu system from X-Humanoid. The center was founded in 2023 by UBTech and several organizations, including Xiaomi; it develops products and applications for humanoids.

UBTech’s device is a step towards making humanoid robots cheaper, SCMP notes. Unitree Robotics previously attracted public attention by offering a 1.8-meter version of the H1 for 650,000 yuan ($89,500). These robots performed folk dances during the Lunar New Year broadcast on China Central Television in January.

EngineAI’s PM01 model sells for 88,000 yuan ($12,000), but it is 1.38 meters tall. Another bipedal version, the SA01, sells for $5,400, but without the upper body.

In June 2024, Elon Musk said that Optimus humanoid robots will bring Tesla’s market capitalization to $25 trillion.

Alibaba Upgrades Quark Into a Next-Gen AI Assistant

According to Bloomberg, the Quark app has undergone an update that incorporates the latest Qwen neural network, enhancing its functionality. Quark was first introduced in 2016 as a web browser, but it has since been transformed into an AI assistant that combines sophisticated chatbot-style conversation with independent reasoning and task completion in one easy-to-use application.

The “new Quark” is designed to be a versatile tool, capable of tackling a wide range of tasks with remarkable efficiency. From generating high-quality images and drafting detailed articles to planning personalized travel itineraries and creating concise meeting minutes, Quark is poised to become an indispensable companion for its users. This transformation reflects Alibaba’s ambition to integrate artificial intelligence more deeply into everyday life, offering a glimpse into the future of smart, intuitive technology.

Wu Jia, CEO of Quark and Vice President of Alibaba, emphasized the app’s potential to unlock new horizons for its users. “As our model continues to evolve, we see Quark as a gateway to boundless opportunities,” Wu stated. “With the power of artificial intelligence, users can explore and accomplish virtually anything they set their minds to.”

Quark’s journey began nearly a decade ago as a simple web browser, but it has since grown into a powerhouse with a reported user base of 200 million people across China. This impressive milestone underscores Alibaba’s ability to scale and adapt its offerings to meet the demands of a rapidly changing digital landscape.

The revamped Quark builds on Alibaba’s recent advancements in AI technology, including the introduction of the QwQ-32B model in March 2025, a reasoning-focused AI designed to enhance problem-solving and decision-making capabilities. By integrating the Qwen neural network, Quark now stands at the forefront of Alibaba’s AI ecosystem, blending innovation with practicality to cater to both individual and professional needs.

This strategic overhaul positions Quark as more than just an app—it’s a visionary tool that could redefine how users interact with technology, solidifying Alibaba’s role as a global leader in AI-driven solutions. As the company continues to refine its models, Quark promises to deliver an ever-expanding array of features, making it a dynamic platform for creativity, productivity, and exploration.

Meta Develops Specialised AI Chip

Meta is testing its own chip for training AI systems, Reuters reports, citing sources.

According to the agency, the new processor is a specialized accelerator aimed at solving specific artificial intelligence tasks. This approach makes the chip more energy efficient than the graphics processing units (GPUs) traditionally used for AI workloads.

The company is collaborating with Taiwan’s TSMC. At the moment, the initial stage of development has been completed, which includes creating and sending prototypes to the chip factory for testing.

Developing its own processors is part of Meta’s plan to reduce infrastructure costs. It is betting on artificial intelligence to ensure growth.

The corporation projects 2025 spending of $114–119 billion, of which $65 billion will be directed to the artificial intelligence sector.

Meta wants to start using its own chips for AI tasks in 2026, Reuters writes.

In May 2023, the company introduced two specialised processors for artificial intelligence and video processing tasks.

Earlier, the media learned about OpenAI working on its own AI processor in partnership with Broadcom and TSMC.

The Chinese company ByteDance is developing a similar product in collaboration with Broadcom.

Event-Driven Architectures for AI Applications: Patterns & Use Cases

The landscape around Artificial Intelligence (AI) is constantly changing, increasing the demand for flexible, scalable, and real-time systems. In AI application development, the Event-Driven Architecture (EDA) approach enables flexible responsiveness at a structural level. This note, accompanying ExploreStack’s editorial calendar, attempts to capture the essence, structure, patterns, and use cases of EDA in relation to AI, with a particular focus on what matters to technical managers and practitioners.

Exploring Event-Driven Architecture

EDA – event-driven architecture – stands out among software architectures because it allows applications to respond to events in real time while improving scalability and loosening coupling. An event is anything of significance: a user action, a data change, or a shift in system state that requires a response. Unlike the traditional request-and-response model, EDA allows asynchronous communication, where individual components publish and subscribe to events independently. This is particularly important for AI applications, which tend to work with huge quantities of data and need to process them quickly so that inferences and actions arrive on time.

EDA’s relevance to AI applications comes mostly from its ability to handle highly dynamic data workloads. For instance, AI models may need to process streaming data, act on predictions, or ingest new sets of information cyclically. Because EDA decouples components, it guarantees flexibility, real-time responsiveness, and the ability to scale – all essential for modern AI systems.

Key Patterns in AI Event-Driven Applications

Research and industry practice have identified several patterns within Event-Driven Architecture (EDA) that are particularly useful for AI applications. These patterns address specific problems and improve the efficiency and effectiveness of AI systems:

  1. Asynchronous Inference  
    • Most AI models, especially image generation models or those that rely on Large Language Models (LLMs), require a great deal of computation and take correspondingly long to respond. In synchronous systems this blocks the caller and degrades the user experience. EDA solves the problem by letting applications publish inference requests as events that are handled by other components – workers or serverless functions – which perform the task and publish results back as events, notifying the application when they are finished (a minimal sketch follows this list). Such systems are more responsive, use resources better, and can manage much higher levels of concurrency, as seen in Stable Diffusion applications where asynchronous inference decreases idle time during peak demand periods.
  2. Real-time Data Updates
    • AI models are only as effective as the data they are trained on, and in many applications, data is dynamic, requiring periodic updates or retraining. Events can trigger these updates automatically when new data arrives or when specific conditions are met, such as a threshold number of new records. This ensures the model remains relevant and accurate over time without manual intervention. For example, in conversational search systems, scheduled tasks and workflows configured via EDA ensure timely and accurate data updates in knowledge bases, leveraging event-driven advantages for enhanced user experience.
  3. Event-Triggered Actions
    • AI can analyse events to detect patterns, anomalies, or predictions and trigger further actions within the system. For instance, user behavior events can lead to personalised recommendations, while fraud detection events can initiate alerts or block transactions. This pattern enables proactive and personalised interactions, enhancing user engagement and system efficiency. It is particularly useful in scenarios where immediate action is required, such as in financial systems where real-time fraud detection is critical.
  4. Decoupling Components
    • Complex AI systems often comprise multiple components, such as data ingestion, preprocessing, model training, and prediction, which need to work together but can be managed independently. EDA facilitates this decoupling by using events as the means of communication, allowing each component to operate separately. This modularity makes it easier to scale, maintain, and update individual parts without affecting the entire system, enhancing overall system resilience and flexibility. This pattern is evident in microservices architectures, where AI components can scale independently based on demand.
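
To make the asynchronous-inference and decoupling patterns concrete, here is a minimal Python sketch. It is broker-agnostic: an in-process queue stands in for an event broker such as Kafka or RabbitMQ, and fake_model is a placeholder for a real model; the event names and fields are invented for the example.

```python
# Minimal sketch of asynchronous inference over an event queue.
# queue.Queue stands in for a real event broker (Kafka, RabbitMQ, etc.),
# and fake_model stands in for a slow LLM or image-generation call.
import queue
import threading

requests = queue.Queue()   # carries "inference.requested" events
results = queue.Queue()    # carries "inference.completed" events

def fake_model(prompt: str) -> str:
    return prompt.upper()  # placeholder for heavy computation

def inference_worker() -> None:
    # Decoupled consumer: it knows nothing about who publishes requests.
    while True:
        event = requests.get()
        if event is None:      # sentinel: shut the worker down
            break
        results.put({
            "type": "inference.completed",
            "request_id": event["request_id"],
            "payload": fake_model(event["payload"]),
        })

threading.Thread(target=inference_worker, daemon=True).start()

# The publisher stays responsive: it fires an event and collects the
# result whenever it is ready, instead of blocking on the model call.
requests.put({"type": "inference.requested", "request_id": 1,
              "payload": "summarise the quarterly report"})
print(results.get())
requests.put(None)   # stop the worker
```

The same topology applies to real-time data updates: a retraining worker could subscribe to "data.arrived" events in exactly the same way.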

Use Cases and Practical Applications

EDA’s application in AI is demonstrated through various use cases, each addressing specific business needs and leveraging the patterns discussed. These use cases highlight how EDA can transform AI applications, improving performance and user experience:

  1. Chatbots and Virtual Assistants
    • In this scenario, user messages are treated as events that trigger natural language processing (NLP) analysis. Based on the intent and entities extracted, further events are generated to fetch data from databases, call external APIs, or perform other actions. Responses are then formatted and sent back to the user via events, enabling efficient handling of concurrent queries and seamless integration with various services. This approach is crucial for maintaining real-time interactions, as seen in AI chatbots that use message queues for efficient information transmission, enhancing user loyalty through proactive, human-like communications.
  2. Recommendation Systems
    • Recommendation systems rely on user interactions, such as clicks, purchases, or ratings, to provide personalized suggestions. These interactions generate events that update user profiles in real-time, triggering the recommendation engine to recalculate and update recommendations. This ensures that suggestions are always based on the latest behavior, enhancing personalization and relevance. For example, e-commerce platforms use EDA to deliver up-to-date product recommendations, improving customer satisfaction and conversion rates.
  3. Fraud Detection
    • In financial institutions, each transaction is an event analyzed by an AI model trained to detect patterns indicative of fraud. If the model identifies a suspicious transaction, it publishes an event to trigger further investigation or block the transaction, enabling real-time detection and response (a minimal sketch follows this list). This use case is critical for reducing financial losses and improving security, with EDA facilitating immediate action based on AI insights.
  4. Predictive Maintenance
    • In IoT applications, sensor data from machinery is streamed as events into the system. These events are processed by an AI model that predicts the likelihood of equipment failure. If the prediction indicates a high risk, an event is published to notify maintenance personnel or automatically schedule maintenance tasks, reducing downtime and optimizing maintenance schedules. This is particularly valuable in manufacturing, where EDA ensures timely interventions based on AI predictions.
  5. Personalised Marketing
    • Customer interactions, such as visiting certain pages or clicking on ads, generate events that build customer profiles. AI models analyze these profiles to determine the most effective marketing messages for each customer. When a customer meets specific criteria, such as not making a purchase in a while, an event triggers the sending of a personalized message, improving engagement and conversion rates. This use case demonstrates how EDA can enhance customer experiences through targeted communications.
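
As a hedged illustration of the fraud-detection flow above, the sketch below scores each transaction event with a placeholder rule and publishes an alert event when the score crosses a threshold. The score_transaction logic, the field names, and the 0.9 threshold are all invented for the example; a real system would call a trained model here.

```python
# Sketch of an event-triggered fraud check; the scoring rule is a
# placeholder for a trained model, and the threshold is illustrative.
from typing import Callable

ALERTS: list[dict] = []   # stands in for a "fraud.alert" event stream

def score_transaction(tx: dict) -> float:
    # Placeholder heuristic: a real system would run a trained model.
    return 0.95 if tx["amount"] > 10_000 else 0.1

def on_transaction_event(tx: dict, publish: Callable[[dict], None]) -> None:
    score = score_transaction(tx)
    if score > 0.9:   # illustrative threshold
        publish({"type": "fraud.alert", "tx_id": tx["tx_id"], "score": score})

for tx in [{"tx_id": "a1", "amount": 120},
           {"tx_id": "b2", "amount": 25_000}]:
    on_transaction_event(tx, ALERTS.append)

print(ALERTS)   # only the 25,000 transaction triggers an alert
```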

An interesting observation is how EDA supports personalised marketing, an unexpected application where customer behaviour events trigger tailored messages, boosting engagement in ways not immediately obvious from traditional AI use cases.

Implementation Considerations

When implementing EDA for AI applications, several key considerations ensure the system’s effectiveness and reliability:

  • Choosing the Right Event Broker: Select a robust event broker capable of handling the volume and variety of events, such as Apache Kafka, RabbitMQ, Amazon EventBridge, or Google Cloud Pub/Sub. The choice depends on factors like scalability, latency, and integration with existing systems.
  • Designing Events and Event Schemas: Define clear and consistent event schemas to ensure all components understand the structure and meaning of the events, including event type, payload, and metadata. This is crucial for maintaining interoperability and avoiding errors in event processing.
  • Handling Failures and Retries: Implement mechanisms to handle event processing failures, such as retries with exponential backoff, dead-letter queues for unprocessed events, or alerting systems for manual intervention (a minimal sketch follows this list). This ensures system resilience, especially in high-volume AI applications.
  • Monitoring and Debugging: Use monitoring tools to track event production, consumption, and processing times, identifying bottlenecks and ensuring system performance. Tools like Alibaba Cloud’s Application Real-Time Monitoring Service (ARMS) can be instrumental for long-term operations and maintenance.
  • Security and Compliance: Ensure the event-driven system adheres to security best practices, such as encryption of event data, access controls, and compliance with relevant regulations like GDPR or HIPAA, to protect sensitive AI data and maintain trust.
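
Two of these points, event schemas and failure handling, lend themselves to a short sketch. In the hedged Python example below, a dataclass pins down the event shape (type, payload, metadata), and a consumer retries with exponential backoff before routing the event to a dead-letter queue; the handler, retry counts, and delays are invented for illustration.

```python
# Hedged sketch: a typed event schema plus retry-with-backoff and a
# dead-letter queue. Event fields, retry counts, and delays are illustrative.
import time
from dataclasses import dataclass, field

@dataclass
class Event:
    type: str                                     # e.g. "inference.requested"
    payload: dict                                 # event body
    metadata: dict = field(default_factory=dict)  # trace IDs, timestamps, ...

dead_letter_queue: list[Event] = []

def process_with_retries(event: Event, handler, max_attempts: int = 3) -> None:
    delay = 0.1
    for attempt in range(1, max_attempts + 1):
        try:
            handler(event)
            return
        except Exception as exc:
            print(f"attempt {attempt} failed: {exc}")
            if attempt == max_attempts:
                dead_letter_queue.append(event)   # park it for inspection
                return
            time.sleep(delay)
            delay *= 2                            # exponential backoff

def flaky_handler(event: Event) -> None:
    # Always fails, to demonstrate the retry and dead-letter path.
    raise RuntimeError("downstream model unavailable")

process_with_retries(Event("inference.requested", {"prompt": "hi"}),
                     flaky_handler)
print(len(dead_letter_queue))   # 1
```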

Comparative Analysis: Challenges and Solutions

To further illustrate, consider how the recurring challenges of AI applications map to EDA’s answers, based on insights from industry practices:

  • Slow, compute-heavy inference: addressed by asynchronous inference over events
  • Stale models and data: addressed by real-time, event-triggered data updates
  • Delayed reaction to anomalies and opportunities: addressed by event-triggered actions
  • Tightly coupled, hard-to-scale pipelines: addressed by decoupling components through events

This mapping, derived from the Alibaba Cloud Community, highlights how EDA tackles specific AI challenges, reinforcing its suitability for these applications.

Conclusion and Future Outlook

EDA provides a flexible and scalable framework that is particularly well-suited for AI applications. By leveraging patterns such as asynchronous inference, real-time data updates, event-triggered actions, and component decoupling, organizations can build AI systems that are responsive, efficient, and adaptable to changing requirements. The use cases, from chatbots to predictive maintenance, demonstrate practical applications that enhance business outcomes and user experiences.

Looking forward, as AI continues to advance and integrate more deeply into various aspects of business and society, the importance of robust, event-driven architectures will only grow. Technical leaders, particularly CTOs, can position their organizations at the forefront of this evolution by adopting EDA, delivering innovative and high-impact AI solutions that meet the demands of a dynamic digital landscape.

How AI is Revolutionising Behavioural Biometrics For Authentication

Advanced AI is transforming the field of behavioural biometrics, ushering in a new and far better era of secure authentication. By monitoring a user’s individual characteristics – typing tempo, mouse movement, and touchscreen pressure – AI-powered behavioural biometrics enables continuous, uninterrupted verification, improving the user experience while securing it more effectively.

The History of Behavioural Biometrics

From the very beginning, authentication systems that rely on passwords and static biometrics like fingerprints have been vulnerable to bypass. Behavioural biometrics, by contrast, draw on dynamic interactions and actions performed by users, which are nearly impossible to imitate, strengthening the security framework. Here, AI plays a critical role by examining massive amounts of user interactions in real time and detecting small deviations that conventional systems would miss.

AI’s Role in Enhancing Behavioural Biometrics

With AI, relevant traits can be extracted from user interactions, turning behavioural data into measurable metrics. For user validation, AI applies pattern recognition alongside more advanced methods such as neural networks and support vector machines. Continuous monitoring enables instant detection of anomalies, which helps respond rapidly to security risks and unauthorised access.
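
As a rough illustration of turning behavioural data into measurable metrics, the Python sketch below computes keystroke dwell and flight times from (key, press, release) timestamps and compares them against a stored user profile. The sample timings, the enrolment profile, and the 50% tolerance are invented for the example; a production system would feed such features into a trained classifier rather than a fixed threshold.

```python
# Hedged sketch: keystroke-dynamics features (dwell and flight times)
# compared against a stored user profile. Data and threshold are invented.
from statistics import mean

def extract_features(events: list[tuple[str, float, float]]) -> dict:
    """events: (key, press_time, release_time) tuples, in seconds."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {"mean_dwell": mean(dwell), "mean_flight": mean(flight)}

profile = {"mean_dwell": 0.11, "mean_flight": 0.08}   # learned at enrolment

def matches_profile(features: dict, tolerance: float = 0.5) -> bool:
    # Flag the session if any metric deviates more than 50% from the profile.
    return all(abs(features[k] - profile[k]) / profile[k] <= tolerance
               for k in profile)

session = [("h", 0.00, 0.10), ("i", 0.18, 0.30), ("!", 0.39, 0.50)]
feats = extract_features(session)
print(feats, "accepted" if matches_profile(feats) else "anomalous")
```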

Real-World Applications and Metrics

  1. Fraud Detection in Financial Services: A leading European insurer implemented behavioural biometrics to analyse how claimants interacted with online forms, detecting unnatural typing patterns and navigation behaviours indicative of fraud. This led to a 40% reduction in fraudulent claims within six months.
  2. Enhanced Customer Experience: An American health insurance company used behavioural biometrics for customer authentication, recognising users based on their interaction patterns with the company’s app. This approach reduced average customer service call times by 30%, significantly improving customer satisfaction.
  3. Risk Assessment Accuracy: A life insurance provider in Asia incorporated behavioural biometrics to refine risk assessment models by analysing lifestyle patterns affecting health and longevity. This led to more accurate premium calculations and personalised insurance packages.

Privacy and Ethics

The application of AI in behavioural biometrics comes with notable ethical and data privacy issues. Even though these systems increase security, they need to be applied with care and responsibility, given the nature of the data. Security, user privacy, and inclusivity need to be balanced very carefully. Approaches like federated learning and edge computing provide the means for AI models to be trained on the user’s device, which greatly minimises the danger of breaches and strengthens compliance with privacy laws such as the GDPR.

Challenges and Future Outlook

Though promising, behavioural biometrics struggle with privacy and accuracy, as well as general user acceptance. Businesses in the field need to fortify protections and gain consent from users to avoid oversharing sensitive information. Usability and security have to be balanced because excessive false acceptances or rejections can undermine user trust in the system. Building trust requires addressing cultural differences along with the need for openness. Incorporating ethics focused on privacy, consent, and robust security makes the system more reliable.

As the technology advances, the integration of AI with behavioural biometrics will enhance authentication systems across numerous industries, offering users both security and convenience.

China Set to Launch Strategic Policy Driving RISC-V Chip Adoption in 2025

In a landmark move poised to reshape its technological landscape, China is gearing up to launch its inaugural national policy championing the adoption of RISC-V chips. This strategic initiative, slated for release as early as March 2025, marks a significant step in the country’s quest to pivot away from Western-dominated semiconductor technologies and bolster its homegrown innovation amid escalating global tensions.

Insiders familiar with the development reveal that the policy has been meticulously crafted through a collaborative effort involving eight key government entities. Among them are heavyweights like the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Science and Technology, and the China National Intellectual Property Administration. Together, these bodies aim to cement RISC-V’s role as a cornerstone of China’s burgeoning tech ecosystem, fostering an environment ripe for domestic chip development and deployment.

The mere whisper of this policy has already sent ripples through the financial markets, igniting a wave of optimism among investors. On the day of the leak, Chinese semiconductor stocks staged an impressive rally. The CSI All-Share Semiconductor Products and Equipment Index, which had been languishing earlier, reversed course to surge by as much as 2.5%. Standout performers included VeriSilicon, which hit its daily trading cap with a 10% spike, alongside ASR Microelectronics, Shanghai Anlogic Infotech, and 3Peak, whose shares soared between 8.6% and an eye-catching 15.4% in afternoon trading.

At the heart of this policy push lies RISC-V, an open-source chip architecture that’s steadily carving out a global niche as a versatile, cost-effective rival to proprietary giants like Intel’s x86 and Arm Holdings’ microprocessor designs. Unlike its high-powered counterparts, RISC-V is often deployed in less demanding applications—think smartphones, IoT devices, and even AI servers—making it a pragmatic choice for a wide swath of industries. In China, its allure is twofold: slashed development costs and, critically, its freedom from reliance on U.S.-based firms, a factor that’s taken on heightened urgency amid trade restrictions and geopolitical friction.

Until now, RISC-V’s rise in China has been organic, driven by market forces rather than official mandates. This forthcoming policy changes the game, thrusting the architecture into the spotlight as a linchpin of Beijing’s broader campaign to achieve technological self-sufficiency. The timing is no coincidence—U.S.-China relations remain strained, with American policymakers sounding alarms over China’s growing leverage in the RISC-V space. Some U.S. lawmakers have even pushed to curb American companies’ contributions to the open-source platform, fearing it could turbocharge China’s semiconductor ambitions.

China’s RISC-V ecosystem is already buzzing with activity, spearheaded by homegrown innovators like Alibaba’s XuanTie division and rising star Nuclei System Technology, both of which have rolled out commercially viable RISC-V processors. The architecture’s flexibility is proving especially attractive in the AI sector, where models like DeepSeek thrive on efficient, lower-end chips. For smaller firms chasing affordable AI solutions, RISC-V offers a tantalizing blend of performance and price—a trend that could gain serious momentum under the new policy.

Sun Haitao, a manager at China Mobile System Integration, underscored the pragmatic appeal of RISC-V in a recent statement. “Even if these chips deliver just 30% of the performance of top-tier processors from NVIDIA or Huawei,” he noted, “their cost-effectiveness becomes undeniable when you scale them across multiple units.” This scalability could prove transformative for industries looking to maximize output without breaking the bank.

As China prepares to roll out this groundbreaking policy, the global tech community is watching closely. For Beijing, it’s a calculated gambit to secure its place at the forefront of the semiconductor race—one that could redefine the balance of power in a world increasingly divided by technology.

Opinion: AI Will Never Gain Consciousness

Artificial intelligence will never become a conscious being, owing to the lack of aspirations that are inherent in humans and other biological species. This statement was made by Sandeep Nailwal, co-founder of Polygon and the AI company Sentient, in a conversation with Cointelegraph.

The expert does not believe that the end of the world is possible due to artificial intelligence gaining consciousness and seizing power over humanity.

Nailwal was critical of the theory that consciousness arises accidentally as a result of complex chemical interactions or processes. Although such processes can lead to the emergence of complex cells, they cannot give rise to consciousness, the entrepreneur noted.

The co-founder of Polygon also expressed concerns about the risks of surveillance of people and the restriction of freedoms by centralized institutions with the help of artificial intelligence. Therefore, AI should be transparent and democratic, he believes.

“[…] Ultimately, global AI, which can create a world without borders, must be controlled by every person,” Nailwal emphasized.

He added that everyone should have a personal artificial intelligence that is loyal and protects against the neural networks of influential corporations.

Recall that in January, Simon Kim, CEO of the crypto venture fund Hashed, expressed confidence that the future of artificial intelligence depends on a radical shift: opening the “black box” of centralized models and creating a decentralized, transparent ecosystem on the blockchain.