Why hyper-personalised UX is the future – and how AI powers it

Author: Victor Churchill, a Product Designer at Andela. He is an exceptional Product Designer with over 7 years of experience identifying and simplifying complexities in B2B, B2C and B2B2C solutions. Victor has a proven track record of conducting research and analysis, and of utilising those insights to drive business growth and impact.

***

After taking the world by storm in just a few years, AI is now changing marketing and UX design. With people adapting to new advertising techniques ever more quickly, the only way to really engage potential customers is to create a meaningful connection on a deeper level. That’s where hyper-personalised user experiences step in.

In marketing, hyper-personalised user experiences are a way to drive conversions by presenting viewers with highly relevant offers or content. At the heart of this technology lies powerful AI that crunches vast amounts of user data in order to tailor content to a specific user. To do this, it draws on multiple sources of information: user behaviour data (clicks, search queries, time spent on each page), demographic and profile data (age, location, language), and contextual data (device model, time of day, and browsing session length).
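To make the data layer concrete, here is a minimal sketch of how those three categories of signals might be gathered into a single profile object. All field names and values are illustrative, not taken from any real system:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Behavioural data: what the user does on the product
    clicks: int = 0
    search_queries: list = field(default_factory=list)
    avg_time_on_page_sec: float = 0.0
    # Demographic and profile data: who the user is
    age: int = 0
    location: str = ""
    language: str = "en"
    # Contextual data: how and when the user is browsing
    device: str = "desktop"
    hour_of_day: int = 12
    session_length_min: float = 0.0

# A profile assembled from the kinds of signals a personalisation
# engine might collect during a single browsing session
profile = UserProfile(
    clicks=42,
    search_queries=["running shoes", "marathon training"],
    avg_time_on_page_sec=35.0,
    age=29,
    location="Lagos",
    language="en",
    device="mobile",
    hour_of_day=21,
    session_length_min=12.5,
)
```

In practice each category would be populated by a different pipeline (event tracking, account data, request metadata), but they converge into one profile the recommendation logic can read.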

After gathering all the data it can collect, AI segments users into different categories based on the goals of the campaign: frequent buyers and one-time visitors, local and international shoppers, and so on. Algorithms then analyse potential ways to improve the user experience and, based on the results, the software prioritises one form of content or feature over another for that user. For example, a fintech app might notice that a user frequently transfers money internationally and start prioritising currency exchange rates in their dashboard.
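The fintech example can be sketched as a simple prioritisation rule. The widget names and the three-transfer threshold below are invented for illustration:

```python
def prioritise_dashboard(user_signals):
    """Reorder dashboard widgets from behavioural signals (illustrative rules)."""
    widgets = ["balance", "savings", "fx_rates", "offers"]
    # A user who transfers money internationally several times a month
    # gets currency exchange rates promoted to the top of the dashboard.
    if user_signals.get("intl_transfers_last_30d", 0) >= 3:
        widgets.remove("fx_rates")
        widgets.insert(0, "fx_rates")
    return widgets

print(prioritise_dashboard({"intl_transfers_last_30d": 5}))
# ['fx_rates', 'balance', 'savings', 'offers']
```

A production system would learn such thresholds from data rather than hard-code them; the point is only that behavioural signals feed directly into layout decisions.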

As a Senior Product Designer at Waypoint Commodities, I always draw parallels between hyper-personalised user experiences and the way streaming platforms like Netflix and Spotify operate. These services personalise recommendations based on the customer’s spending preferences and tastes. This way users get experiences that feel custom-made, which can dramatically increase engagement, time spent on the platform, and conversion rates. A report from McKinsey revealed that 71 percent of consumers expected companies to deliver personalised interactions, and 76 percent got frustrated when that didn’t happen. The numbers are even higher in the US market, where up to 78% of customers are more likely to recommend brands with hyper-personalised interactions.

This trend is most visible in fintech and e-commerce, where user experience is critical for driving conversions, building trust, and keeping customers engaged. In these spheres, friction such as irrelevant recommendations or a lack of personalisation can lead to lost revenue and abandoned transactions.

In order to create a hyper-personalised design, it is important not to overstep. A study by Gartner revealed that poor personalisation efforts risk losing 38% of existing customers, emphasising the need for clarity and trust in personalisation strategies. The effort can backfire if users feel like they are being constantly watched. To avoid this, I always follow a few simple but essential principles when designing for personalisation.

Be transparent.

When you show something hyper-personalised to your customer, add a simple note saying ‘Suggested for you based on your recent purchases’ or ‘Recommended for you based on your recent activity’. This way users know which channels your information comes from, and your recommendations don’t come as a shock to them.

Don’t forget to leave some control to the user.

Even if you fine-tune your system to detect customers’ needs perfectly, some people will still find certain recommendations irrelevant. This is why it’s important to allow customisation through controls like ‘Stop recommending this’ and ‘Show more like this’.
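Wiring up such controls can be as simple as filtering future recommendations against the user’s suppressions. A toy sketch with invented item data:

```python
def apply_feedback(recommendations, suppressed_categories):
    """Honour 'Stop recommending this' by filtering out suppressed categories."""
    return [r for r in recommendations
            if r["category"] not in suppressed_categories]

recs = [
    {"title": "Noise-cancelling headphones", "category": "electronics"},
    {"title": "Yoga mat", "category": "fitness"},
]
# The user clicked 'Stop recommending this' on an electronics item earlier.
print(apply_feedback(recs, suppressed_categories={"electronics"}))
# [{'title': 'Yoga mat', 'category': 'fitness'}]
```

‘Show more like this’ would work the same way in reverse, boosting rather than removing a category.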

Don’t overuse personal data.

Even though it can sometimes feel like everybody is used to sharing data with advertisers, crossing personal boundaries usually backfires. According to a survey by KPMG, 86% of consumers in the US expressed growing concerns about the privacy of their personal data, and 30% of participants said they are not willing to share any personal data at all.

Be subtle in your personalisation and don’t implement invasive elements that mention past interactions too explicitly or use sensitive data. For example, don’t welcome a user with the words ‘Worried about your credit score?’ or ‘Do you remember the shirt you checked out at 1:45 AM last night?’.

Be clear about AI usage.

Reports suggest AI-driven personalisation lifts revenue by 10-15% on average. However, if the majority of decisions in a recommendation system are made by artificial intelligence, people have a right to know that. Don’t put too much stress on it; just mention the important part with a little message saying that your suggestions are powered by AI. This way you can avoid misunderstandings.

Even though current systems already detect customer needs well, there’s still room for improvement. The hyper-personalised user experiences of the future could learn to read new signals like voice, gestures and emotions, or anticipate needs before users even express them. It is clear that AI-driven UX design will only get better, and now is the best time to embrace this technology.

AI in Cybersecurity: Principles, Mitigation Frameworks, and Emerging Developments

What is AI in Cybersecurity?

AI in cybersecurity means using intelligent algorithms and machine-learning models to detect, analyze, and respond to cyber threat scenarios. Sophisticated AI-powered cybersecurity frameworks are capable not only of analyzing and responding pre-emptively within split seconds, but also of detecting massive volumes of incoming data, categorizing relevant information, and sifting through troves of data.

AI’s capabilities as a supporting measure alongside other security controls can be understood in the following ways. Routine processing tasks such as log review and vulnerability scans can be executed with ease; with AI handling them, cybersecurity personnel can focus on more complex tasks while agile bots take care of timing, strategy deployment, and simulation plans. AI’s role in automation is also important for threat detection: with advanced detection systems and real-time attack alerts, threats can be dealt with in real time, and emergency response solutions can be triggered automatically. In addition, AI systems can adapt to the evolving nature of threats.

AI in cybersecurity boosts vulnerability management and reinforces the ability to counter emerging cyberattacks. Real-time monitoring and proactive readiness help mitigate damage: AI technologies sift through behavioral patterns and automate phishing detection and monitoring. AI learns from previous incidents and identifies emerging attack patterns, enhancing defensive posture and protecting sensitive information.

How Can AI Assist in Avoiding Cyberattacks?

AI in cybersecurity enhances cyber threat intelligence and allows security professionals to:

  • Look for signs of a looming cyberattack
  • Improve their cyber defenses
  • Examine usage data like fingerprints, keystrokes, and voices to confirm user identity
  • Uncover evidence – or clues – about specific cyber attackers and their true identities
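As a rough illustration of the keystroke idea above, a system could compare a login sample’s inter-key timings against an enrolled baseline. The comparison method and the tolerance value below are arbitrary assumptions, not a real biometric algorithm:

```python
def keystroke_match(enrolled_intervals, sample_intervals, tolerance=0.05):
    """Compare mean absolute difference of inter-key timings (in seconds)."""
    diffs = [abs(a - b) for a, b in zip(enrolled_intervals, sample_intervals)]
    return sum(diffs) / len(diffs) <= tolerance

# Enrolled typing rhythm vs. a fresh login sample
print(keystroke_match([0.12, 0.18, 0.15], [0.13, 0.17, 0.16]))  # True
# A very different rhythm suggests someone else is at the keyboard
print(keystroke_match([0.12, 0.18, 0.15], [0.50, 0.90, 0.80]))  # False
```

Real systems model many more features (hold times, digraph latencies, pressure) and use statistical or ML models rather than a single averaged difference.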

Is Automating Cybersecurity a Risk?

Currently, monitoring systems require more human resources than necessary. AI technology can assist in this area and greatly improve multitasking capabilities. Using AI to track threats will optimize time management for organizations under constant pressure to identify new threats, further enhancing their capabilities. This is especially important in light of modern cyberattacks becoming more sophisticated.

The information security field can draw on a treasure trove of prior automation cases from elsewhere in business operations, where AI has already been put to ample use, so automating cybersecurity with AI is not a leap into the unknown. For instance, Human Resources departments already automate onboarding, granting new employees access to company assets and providing them the resources requisite to execute their roles using sophisticated software tools.

AI solutions allow companies with limited numbers of expert security personnel to get the most out of their cybersecurity spending through automation. Organizations can now fortify their operations and improve efficiency without having to find scarce, highly skilled personnel.

The advantages of implementing AI automation in cybersecurity are:

  • Saving on costs: Integrating AI technology with cybersecurity enables faster data collection, which aids incident response management and makes it more agile. Furthermore, security personnel are freed from monotonous manual work, allowing them to engage in more strategic tasks that are advantageous to the company.
  • Elimination of human error: A common weakness of conventional security systems is their reliance on operators, who are always prone to error. AI technology in cybersecurity removes the need for human intervention in most security processes. Human resources that are truly in demand can then be allocated where they are needed most, resulting in superior outcomes.
  • Improved strategic thinking: Automated systems in cybersecurity assist an organization in pinpointing gaps in its security policies and rectifying them. This allows the establishment of procedures aimed at achieving a more secure IT infrastructure.

Despite all of this, organizations must understand that cybercriminals adapt their tactics to counter new AI-powered cybersecurity measures. Cybercriminals use AI to launch sophisticated and novel attacks and introduce next-generation malware designed to compromise both traditional systems and those fortified with AI.

The Role of AI in Cybersecurity

1. Password safeguards and user authentication  

Cybersecurity AI implements advanced protective measures for safeguarding passwords and securing user accounts through effective authentication processes. Logging in to web accounts is commonplace nowadays, whether users wish to purchase products or submit sensitive information through forms. These online accounts need to be protected by sophisticated authentication mechanisms to ensure sensitive information does not fall into the wrong hands.

Automated validation systems using AI technologies such as CAPTCHA, facial recognition, and fingerprint scanning allow organizations to confirm whether a user trying to access a service is actually the account owner. These systems counter cybercrime techniques like brute-force attacks and credential stuffing, which could otherwise jeopardize an organization’s entire network.
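A small sketch of the brute-force side of this: counting failed logins per account and locking after a threshold. The five-attempt limit is an illustrative choice, not a recommendation:

```python
from collections import defaultdict

class LoginGuard:
    """Lock an account after repeated failed logins (illustrative thresholds)."""
    def __init__(self, max_failures=5):
        self.max_failures = max_failures
        self.failures = defaultdict(int)

    def record_failure(self, username):
        self.failures[username] += 1

    def record_success(self, username):
        self.failures[username] = 0  # reset the counter on a good login

    def is_locked(self, username):
        return self.failures[username] >= self.max_failures

guard = LoginGuard()
for _ in range(5):          # a credential-stuffing bot hammering one account
    guard.record_failure("alice")
print(guard.is_locked("alice"))  # True
print(guard.is_locked("bob"))    # False
```

AI-driven systems go further than fixed counters, scoring each attempt against behavioural signals, but the enforcement point is the same.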

2. Measures to Detect and Prevent Phishing 

Phishing is a threat that nearly every industry has to deal with, leaving businesses of all kinds susceptible. AI can help firms discover malicious messages and detect anomalies through email security solutions. It analyzes emails by both context and content to determine in a fraction of a second whether they are spam, phishing masquerades, or genuine. AI makes identifying signs of phishing, such as spoofing, forged senders, and domain name misspellings, fast and easy.

Once it has been through its machine-learning training period, the AI finds it much easier to understand how users communicate, their typical behavior, and the wording they use. Advanced spear phishing is more challenging to tackle, as attackers impersonate high-profile figures such as a company’s CEO, which makes prevention critical. To stop takeovers of leading corporate accounts, AI can identify the irregularities in user activity that precede such damage, thereby suppressing spear-phishing attempts.
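The domain-misspelling check mentioned above can be approximated with plain edit distance. A sketch with an invented allow-list of trusted domains:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = ("paypal.com", "microsoft.com")  # illustrative allow-list

def looks_spoofed(sender_domain):
    """Flag domains that are near-misses of trusted ones, e.g. 'paypa1.com'."""
    return any(0 < levenshtein(sender_domain, t) <= 2 for t in TRUSTED_DOMAINS)

print(looks_spoofed("paypa1.com"))   # True: one character swapped
print(looks_spoofed("paypal.com"))   # False: exact match, distance 0
```

Real mail-security products combine many such signals (sender reputation, header analysis, language models over the body text); edit distance is just one cheap and effective heuristic.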

3. Understanding Vulnerability Management 

Each year, newly discovered vulnerabilities are on the rise as cybercriminals find ever smarter ways to hack. With the high volume of new vulnerabilities appearing every day, businesses struggle to keep high-risk threats at bay using their traditional systems.

UEBA (User and Entity Behavior Analytics), an AI-driven security solution, allows businesses to monitor the activities of users, devices, and servers. This enables detection of abnormal activities that can signal potential zero-day attacks. AI in cybersecurity gives businesses the ability to defend themselves against unpatched vulnerabilities long before they are officially reported and patched.
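A minimal example of the UEBA-style idea: compare today’s activity count for a user against that user’s own history using a z-score. The data, the metric, and the threshold are all illustrative:

```python
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's count if it sits more than z_threshold std devs from the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# Daily file-download counts for one user over two normal weeks
history = [10, 12, 9, 11, 10, 13, 8, 10, 11, 12, 9, 10, 11, 10]
print(is_anomalous(history, 11))   # False: within the user's normal range
print(is_anomalous(history, 250))  # True: possible data exfiltration
```

Production UEBA tools model many entities and features jointly and learn baselines continuously, but the core principle is the same: each user and device is compared against its own behaviour, not a global rule.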

4. Network Security

Network security requires creating policies and understanding the network’s topography, both of which are time-intensive processes. Once policies are set, an organization can allow connections that are easily verified as legitimate while scrutinizing those that require deeper inspection for possible malice. These policies also allow organizations to implement and enforce a zero-trust approach to security.

On the other hand, policies across different networks need to be created and managed, which is manual and very time-consuming. Without proper naming conventions for applications and workloads, security teams can spend considerable time figuring out which workloads are tied to which applications. Over time, AI can learn an organization’s network traffic patterns and recommend relevant policies and workload groupings.
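The policy-learning idea can be sketched as recording the network flows seen during a trusted learning window and flagging anything new for review. The flow tuples below are invented:

```python
def learn_baseline(observed_flows):
    """Record (src, dst, port) tuples seen during a trusted learning window."""
    return set(observed_flows)

def flag_unexpected(baseline, live_flows):
    """Anything outside the learned baseline is a candidate for review."""
    return [flow for flow in live_flows if flow not in baseline]

baseline = learn_baseline([
    ("web-frontend", "orders-db", 5432),
    ("web-frontend", "cache", 6379),
])
live = [
    ("web-frontend", "orders-db", 5432),    # normal, seen before
    ("web-frontend", "payments-db", 5432),  # never seen: review or block
]
print(flag_unexpected(baseline, live))
# [('web-frontend', 'payments-db', 5432)]
```

This is the seed of a zero-trust policy: the learned baseline becomes an explicit allow-list, and everything else must justify itself.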

5. Analyzing actions

Analyzing actions allows firms to detect emerging risks alongside recognized weaknesses. Older threat-detection methods, which monitor security perimeters for known attack patterns and indicators of compromise, are inefficient given the ever-growing number of attacks cybercriminals launch each year.

To bolster an organization’s threat-hunting capabilities, behavioral analytics can be implemented. It processes massive amounts of user and device information, using AI models to build profiles of the applications operating on the firm’s network. Such profiles enable firms to analyze incoming data and detect potentially harmful activity.

Leading Cybersecurity Tools Enhanced by AI Technology  

The application of AI technology is now commonplace in various cybersecurity tools, boosting their defensive capabilities. These include:

1. AI-Enhanced Endpoint Security Tools  

These tools help prevent malware, ransomware, and other malicious activity by using AI to detect and mitigate threats on laptops, desktops, and mobile phones.  

2. AI Integrated NGFW  

Integrating AI technologies into Next-Generation Firewalls (NGFW) increases their capabilities in threat detection, intrusion prevention, and application control, safeguarding the network.

3. SIEM AI Solutions  

AI-based SIEM solutions help contextualize multiple security logs and events, making it easy for security teams to streamline threat detection, investigation, and response that would traditionally take far longer.
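As a toy illustration of SIEM-style correlation, events from multiple logs can be grouped by host and checked for suspicious combinations. The event types and host names are invented:

```python
def correlate(events):
    """Group events by host and flag hosts showing a suspicious combination."""
    types_by_host = {}
    for event in events:
        types_by_host.setdefault(event["host"], set()).add(event["type"])
    # Failed logins followed by privilege escalation on the same host is
    # a classic pattern worth surfacing to analysts as one incident.
    suspicious = {"failed_login", "priv_escalation"}
    return sorted(host for host, types in types_by_host.items()
                  if suspicious <= types)

events = [
    {"host": "srv-01", "type": "failed_login"},
    {"host": "srv-01", "type": "priv_escalation"},
    {"host": "srv-02", "type": "failed_login"},
]
print(correlate(events))  # ['srv-01']
```

The contextualization a real SIEM performs is far richer (time windows, asset criticality, threat intelligence), but the payoff is the same: many low-value log lines collapse into one actionable alert.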

4. AI-Enhanced Cloud Security Solutions  

These tools use AI to enforce protective measures on data and applications hosted in the cloud, ensuring safety, compliance and data sovereignty.  

5. AI-Enhanced NDR Solutions

NDR (Network Detection and Response) solutions with AI capabilities monitor network traffic for sophisticated threats, ensuring an efficient response in line with network security policies.

The Upcoming Trends Of AI In Cybersecurity  

Technologies such as machine learning and AI are increasingly pivotal in dealing with cybersecurity threats, mainly because these technologies are capable of learning from whatever data is fed to them. At the same time, the steps and measures put in place need to adapt to the unique challenges brought by new vulnerabilities.

How To Implement Generative Artificial Intelligence In Cybersecurity  

Modern companies are adopting generative AI systems to strengthen existing cybersecurity plans. Generative technology mitigates risk by creating new, synthetic data while ensuring the existing data is preserved.

  • Effective Testing Of Cybersecurity Systems: Organizations can use generative technologies to create and simulate a variety of new data for testing incident response plans and different classes of cyber-attack defense strategies. Identifying system deficiencies through such prior testing greatly increases a firm’s preparedness should a real attack be launched.
  • Anticipating Attacks Through Historical Data: Historical data containing past attack and response tactics can be fed to generative AI to produce predictive strategies. These custom-built models are tailored to the unique requirements of a given firm, helping it stay a step ahead of malicious hackers.
  • Providing Advanced Security Techniques: Augmenting current threat-detection mechanisms with predictive analysis, creating hypothetical scenarios that mimic real offensive strategies, improves a model’s ability to detect real-life cases while flagging even the faintest and newest suspicious activities.
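A very simple stand-in for the test-data generation idea above: crossing invented template fragments to produce varied synthetic phishing messages for exercising a mail filter. Real generative models would produce far richer variation; every domain and subject line here is made up:

```python
import itertools

# Template fragments for synthetic phishing test mail (all values invented)
SUBJECTS = ["Your account will be suspended", "Unpaid invoice attached"]
SENDERS = ["it-support@examp1e.com", "billing@example-secure.net"]

def generate_test_emails():
    """Cross template fragments to produce varied test cases for a mail filter."""
    return [{"subject": subject, "sender": sender}
            for subject, sender in itertools.product(SUBJECTS, SENDERS)]

emails = generate_test_emails()
print(len(emails))  # 4 synthetic test messages
```

Feeding such synthetic cases through the detection pipeline before attackers do is exactly the "prior testing" benefit described in the first bullet.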

Generative AI is a powerful asset on the modern technological battleground against cyber threats. Its ability to simulate situations, foresee possible attacks, and improve threat detection helps an organization’s defenders stay one step ahead of danger.

Advantages of Artificial Intelligence (AI) in the Mitigation of Cyber Risks

Adopting AI tools in cybersecurity gives organizations enormous capabilities for managing risk. Some of the advantages include:

Continuous learning: Learning is one of AI’s most powerful features. Technologies such as deep learning and ML give AI the means to understand an organization’s normal operations and detect deviations from the norm that signal anomalous or malicious behavior. This ongoing learning makes it increasingly challenging for hackers to circumvent the organization’s defenses.

Identifying undiscovered risks: Unknown threats can be detrimental to any organization. With AI, both mapped risks and those that have not yet been identified can be addressed before they become an issue, providing a remedy for security gaps that software providers have yet to patch.

Vast volumes of data: AI systems are capable of deciphering and understanding volumes of data far larger than security professionals could process. As a result, organizations can automatically detect new, sophisticated threats hidden within enormous datasets and amounts of traffic.

Improved vulnerability management: Besides detecting new threats, AI technology allows many organizations to improve the management of their vulnerabilities. It enables more effective assessment of systems, enhances problem-solving, and improves decision-making processes. AI technology can also locate gaps within networks and systems so that organizations can focus on the most critical security tasks.

Enhanced overall security posture: The cumulative risks posed by a range of threats from Denial of Service (DoS) and phishing attacks to ransomware are quite complex and require constant attention. Manually controlling these risks is very tedious. With AI, organizations are now able to issue real-time alerts for various types of attacks and efficiently mitigate risks. 

Better detection and response: AI in cybersecurity aids the swift detection of untrusted data and enables more systematic and immediate responses to new threats, helping protect data and networks. AI-powered cybersecurity systems detect threats faster, improving the reaction to emerging dangers.

IT vs OT Cybersecurity

Defining Operational Technology (OT)

Operational technology (OT) refers to the use of software and hardware to control and maintain processes within industries. OT supervises specialized systems, also termed high-tech specialist systems, in sectors such as power generation, manufacturing, oil and gas, robotics, telecommunication, waste management, and water control.

One of the most common types of OT is industrial control systems (ICS). ICS are used to control and monitor industrial processes and integrate real-time data-gathering and analysis systems, such as SCADA (supervisory control and data acquisition). These systems often employ programmable logic controllers (PLCs), which control and monitor devices like productivity counters, temperature sensors, and automated machines using data from various sensors or devices.
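A PLC-style limit check can be sketched in a few lines; the sensor names and safe ranges below are invented for illustration:

```python
def check_sensors(readings, safe_limits):
    """Return the names of sensors whose readings fall outside safe limits."""
    alarms = []
    for name, value in readings.items():
        low, high = safe_limits[name]
        if not (low <= value <= high):
            alarms.append(name)
    return alarms

safe_limits = {"temperature_c": (0, 80), "pressure_bar": (0.5, 5.0)}
readings = {"temperature_c": 95, "pressure_bar": 2.1}  # overheating
print(check_sensors(readings, safe_limits))  # ['temperature_c']
```

Real PLCs run logic like this in hard real time on dedicated hardware, wired directly to actuators that can shut a process down; the sketch only conveys the shape of the control loop.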

Overall access to OT devices is best limited to small organizational units and teams. Due to the specialized nature of OT, it often operates on tailored software rather than generic Windows OS.  

Safeguarding the OT domain involves SIEM solutions for real-time oversight of application and network activity, security event monitoring, and advanced firewalls that manage traffic flowing into and out of the main control network.

Defining Information Technology (IT)  

Information technology (IT) is a field that involves the creation, administration and use of hardware and software systems, networks, and computer utilities. Nowadays, IT is essential to automating business processes, as it facilitates communication and interaction between human beings and systems as well as between various machines.

IT can be narrowed down to three core focuses:  

  • Operations: Routine supervision and administration of IT departments, with responsibilities ranging from hardware and network support to application and system security auditing and technical support help desk services.
  • Infrastructure maintenance: Setting up and maintaining infrastructure equipment, including cabling, portable computers, voice and telephone systems, and physical servers.
  • Governance: Aligning information technology policies and services with the organization’s IT needs and demands.

The Importance of Cybersecurity in OT and IT

Both operational technology (OT) and information technology (IT) focus on the security of devices, networks, systems, and users.  

In IT, cybersecurity protects data, enables secure user logins, and manages potential cyber threats. Similarly, OT systems require cybersecurity to safeguard critical infrastructure and mitigate the risk of unanticipated delays. Manufacturing plants, power plants, and water supply systems rely heavily on continuous uptime, and any unexpected pause can result in extremely costly downtime.

The need for security becomes vital with the increased interconnectivity of these systems. New cybercriminal exploits permitting access to industrial networks are continuously emerging, and attempts to breach these systems are rising: more than ninety percent of organizations operating OT systems reported experiencing at least one significant security breach within two years of deployment, according to a Ponemon Institute study. Additionally, over fifty percent of these organizations reported cyber-attacks on their OT infrastructure that took equipment or a plant offline.

The World Economic Forum classifies cyber-attacks involving OT systems and critical infrastructure as one of the five major global risks, alongside climate change, geopolitical tensions, and natural disasters.

OT Security vs IT Security: An Overview  

The distinction between OT security and IT security is becoming increasingly vague as OT systems introduce connected devices, and as the rise of IoT (Internet of Things) and IIoT (Industrial Internet of Things) interlinks the devices, machines, and sensors sharing real-time information within enterprises.

As with everything in cybersecurity, IT security and OT security each have their own distinct concerns, ranging from the systems in question to the risks at hand.

Differences Between OT and IT Cybersecurity  

There are marked differences between OT and IT. Firstly, OT systems are autonomous, self-contained, isolated, and run on proprietary software, whereas IT systems are connected, lack that autonomy, and usually run on mainstream operating systems such as Windows and macOS.

1. Operational Environment  

IT and OT cybersecurity operate in different environments. OT cybersecurity protects industrial environments that incorporate tooling, PLCs, and communication over industrial protocols. OT systems are not built on standard operating systems, most lack traditional security hardware and software, and they are programmed heterogeneously, unlike most computers.

On the other hand, IT cybersecurity safeguards devices and peripherals like desktops, laptops, PC speakers, desktop printers, and mobile phones. It protects environments like the cloud and servers using bespoke antivirus and firewall solutions. Communication protocols used include HTTP, RDP, and SSH.

2. Safety vs Confidentiality  

Confidentiality and safety are two distinct priorities within an organization’s IT and OT security practices. IT security concentrates on the confidentiality of the information the organization transmits. OT cybersecurity focuses on protecting critical equipment and processes; the automation systems in any industry demand close supervision to avoid breakdowns and maintain operational availability.

3. Severity vs. Frequency

Each domain’s cybersecurity focus protects against a different profile of security incidents. Cybersecurity for OT (Operational Technology) is designed to safeguard against catastrophic incidents. OT systems usually have limited access points, but the consequences of a breach are severe: even a single incident has the potential to cause widespread devastation, for instance plunging an entire nation into a power outage or contaminating water systems.

Unlike OT, IT systems have numerous gateways and touchpoints because of the internet, all of which can be exploited by cyber criminals. This presents an abundance of security risks and vulnerabilities.

4. Frequency of Patching

OT and IT systems also differ significantly in their patching requirements. Due to the specialized nature of OT networks, they are patched infrequently, as patching typically means a full stop of the production workflow. As a result, components are not always updated and may operate with unpatched vulnerabilities, increasing the risk of a successful exploit.

In contrast, IT components undergo rapid technological change, requiring frequent updates. IT vendors often have set patch dates, and providers like Apple and Microsoft update their software periodically to bring clients to current versions.

Overlapping Characteristics of OT and IT Cybersecurity

Although they are fundamentally different, IT and OT cybersecurity are increasingly linked by the ongoing convergence of the two worlds.

OT devices were previously secured by keeping them offline and accessible to employees only through internal networks. Recently, IT systems have become able to control and monitor OT systems remotely over the internet. This helps organizations operate and monitor the performance of ICS components more easily, enabling proactive replacement of components before extensive damage occurs.

IT is also very important for providing the real-time status of OT systems and correcting errors instantaneously. This mitigates industrial safety risks and resolves OT problems before they impact an entire plant or manufacturing system.

Why IT And OT Collaboration Is Important

The integration of ICS into an organization enhances efficiency and safety; however, it also elevates the importance of IT and OT security collaboration. Inadequate cybersecurity in OT systems exposes organizations to cyber threats as they increase connectivity. This is especially true in today’s cyberspace, where hackers develop sophisticated methods for exploiting system vulnerabilities and bypassing security defenses.

IT security can mitigate OT vulnerabilities by applying its own threat-monitoring systems and the mitigation strategies already deployed on them. In addition, integrating OT systems creates a reliance on baseline IT security controls to minimize the impact of attacks.

IT Sector Sees Mass Layoffs as Automation and Profitability Pressures Mount

The global IT industry is undergoing significant workforce reductions, with over 52,000 employees laid off in the first months of 2025 alone. According to Layoffs.fyi – which tracks publicly reported job cuts across 123 technology companies – nearly 25,000 of those layoffs occurred in April 2025.

Intel has announced plans for the year’s largest downsizing: cutting 20% of its workforce, or roughly 22,000 positions, out of approximately 109,000 employees worldwide. This move echoes a broader pattern of layoffs throughout 2024, when more than 34,000 IT workers lost their jobs in January and over 25,000 in August. Over all of 2024, the industry averaged about 12,700 layoffs per month, compared to 22,000 monthly cuts in 2023.

Normalization, Not Decline, Experts Say


Analysts describe the trend as a “normalization” of employment levels rather than evidence of an industry downturn. They note that a surge of investor funding in recent years fueled rapid hiring – often outpacing companies’ ability to turn a profit. As unprofitable ventures folded or restructured, staff were inevitably released back into the labor market.

Automation’s Growing Role


Approximately 30% of these layoffs are attributed to the swift advancement of automation technologies – beyond just AI. For instance, automated design tools now enable individual designers to build and maintain websites that once required entire teams of developers. As these tools become more capable and widespread, the demand for certain roles continues to shrink, reshaping the IT workforce landscape.

Zuckerberg Predicts AI Will Replace Mid-Level Developers in 2025

Meta CEO Mark Zuckerberg believes artificial intelligence is quickly advancing to the point where it can handle the work typically done by mid-level software developers – potentially within the year.

Speaking on The Joe Rogan Experience podcast, Zuckerberg noted that Meta and other major tech companies are developing AI systems capable of coding at a mid-tier engineer’s level. However, he acknowledged current limitations, such as AI occasionally generating incorrect or misleading code – commonly known as “hallucinations.”

Other tech leaders are equally optimistic. Y Combinator CEO Garry Tan has praised the rise of “vibe coding,” where small teams leverage large language models to build complex apps that once needed large engineering teams.

Shopify CEO Tobi Lütke has gone as far as requiring managers to justify new hires if AI could perform the same tasks more efficiently. Anthropic co-founder Dario Amodei has made a bold prediction: within a year, AI will be capable of writing nearly all code.

At Google, CEO Sundar Pichai recently revealed that over 25% of new code is now AI-generated. Microsoft CEO Satya Nadella reported a similar trend, with a third of the company’s code produced by AI.

Despite the enthusiasm, some experts urge caution. Cambridge University AI researcher Harry Law warns that over-reliance on AI for coding could hinder learning, make debugging harder, and introduce security risks without proper human oversight.

LinkedIn Replaces Keywords With AI, Enhancing Job Search Efficiency

LinkedIn has unveiled a transformative update to its job search functionality, phasing out traditional keyword-based searches in favour of an advanced AI-driven system. This shift promises to deliver more precise job matches by leveraging natural language processing, fundamentally changing how job seekers and employers connect.

AI-Powered Job Matching

Gone are the days of rigid keyword searches. LinkedIn’s new AI system dives deeper into job descriptions, candidate profiles, and skill sets to provide highly relevant matches. According to Rohan Rajiv, LinkedIn’s Product Manager, the platform now interprets natural language queries with greater sophistication, enabling users to search with conversational phrases rather than specific job titles or skills.

For instance, job seekers can now input queries like “remote software engineering roles in fintech” or “creative marketing jobs in sustainable fashion” and receive tailored results. This intuitive approach eliminates the need to guess exact keywords, making the process more accessible and efficient.
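
The difference between rigid keyword matching and concept-level matching can be illustrated with a toy sketch. This is not LinkedIn's implementation (which is not public); the tiny hand-made synonym map below simply stands in for a real embedding model to show why a conversational query like "remote software engineering roles in fintech" can surface listings that share none of its exact words.

```python
# Toy contrast of keyword vs. semantic job matching.
# The SYNONYMS map is a stand-in for a real embedding model; all data is invented.

JOBS = [
    "Remote backend engineer at a payments startup (fintech)",
    "On-site graphic designer for a fashion retailer",
    "Software engineering role, work from home, banking sector",
]

SYNONYMS = {
    "remote": {"remote", "work from home"},
    "software engineering": {"software engineering", "engineer", "developer"},
    "fintech": {"fintech", "payments", "banking"},
}

def keyword_match(query_terms, job):
    """Rigid matching: every query term must appear verbatim."""
    return all(term in job.lower() for term in query_terms)

def semantic_match(query_terms, job):
    """Concept matching: any synonym of each term may appear."""
    job_l = job.lower()
    return all(
        any(s in job_l for s in SYNONYMS.get(term, {term}))
        for term in query_terms
    )

query = ["remote", "software engineering", "fintech"]
print([j for j in JOBS if keyword_match(query, j)])   # exact-word search finds nothing
print([j for j in JOBS if semantic_match(query, j)])  # concept search finds both relevant roles
```

Here the keyword search returns nothing, while the concept-level search recovers both relevant listings, including the one phrased as "work from home" in the "banking sector".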

Enhanced Features for Job Seekers

The update introduces several user-centric features designed to streamline the job search experience:

  • Conversational Search: Users can describe their desired role in natural language, and the AI will interpret and match based on context, skills, and preferences.
  • Application Transparency: LinkedIn now displays indicators when a company is actively reviewing applications, helping candidates prioritise opportunities with higher response potential.
  • Premium Perks: Premium subscribers gain access to AI-powered tools, including interview preparation, mock Q&A sessions, and personalised presentation tips to boost confidence and performance.

A New Era of Job Search Philosophy

LinkedIn’s overhaul reflects a broader mission to redefine job searching. With job seekers outpacing available roles, mass applications have overwhelmed recruiters. The platform’s AI aims to cut through the noise by guiding candidates toward roles that align closely with their skills and aspirations, fostering quality over quantity.

“AI isn’t a magic fix for employment challenges, but it’s a step toward smarter, more meaningful connections,” Rajiv said. By focusing on precision matching, LinkedIn hopes to reduce application fatigue and improve outcomes for both candidates and employers.

Global Rollout and Future Plans

Currently, the AI-driven job search is available only in English, but LinkedIn has ambitious plans to expand to additional languages and markets. The company is also exploring further AI integrations to enhance profile optimisation and career coaching features.

This update marks a significant leap toward a more intelligent, user-friendly job search ecosystem, positioning LinkedIn as a leader in leveraging AI to bridge the gap between talent and opportunity.

Microsoft: AI Now Constitutes 30% of Company Code, Estimated to Reach 95% by 2030

The coding landscape at Microsoft is undergoing swift change owing to the evolving application of artificial intelligence. As CEO Satya Nadella outlined, AI generates about 20 to 30 percent of the code in company repositories, and that figure could jump to 95% by 2030, particularly for code written in Python.

During the ‘LlamaCon’ conference, in a dialogue with Mark Zuckerberg, Nadella also remarked on AI’s increasing prominence in automating software engineering tasks. He pointed out that Python retains the lead in AI-generated code, while languages such as C++ lag far behind because their adoption is more complex.

Microsoft’s Chief Technology Officer Kevin Scott shares this view, predicting a long-term shift where AI will substantially dominate code writing, calling this an inevitable change in development workflows.

A Broader Industry Trend  

Microsoft isn’t the only one experiencing this change. Just last week, Google’s CEO Sundar Pichai said that over 30 percent of Google’s code is also AI-generated. Neither company, however, provided any insight into how those numbers are calculated, which leaves them open to interpretation.

The concern with these loosely defined figures is that AI code generation is not uniform: the reported share depends heavily on how a company measures contributions, whether by lines committed, suggestions accepted, pull requests merged, or some other yardstick.
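
A quick calculation shows how much the metric choice matters. The commit data below is entirely invented for illustration: a single large AI-generated refactor dominates a lines-based count, while a merged-PR count tells a very different story.

```python
# Toy calculation: the "share of AI-written code" shifts with the metric chosen.
# All commit data here is made up purely for illustration.

commits = [
    # (author_kind, lines_added, merged)
    ("ai",    500, True),   # one large AI-generated refactor
    ("human",  40, True),
    ("human",  25, True),
    ("ai",     10, False),  # AI suggestion that was never merged
    ("human",  60, True),
]

def share_by_lines(commits):
    """Fraction of all committed lines attributed to AI."""
    ai = sum(n for kind, n, _ in commits if kind == "ai")
    total = sum(n for _, n, _ in commits)
    return ai / total

def share_by_merged_prs(commits):
    """Fraction of merged contributions attributed to AI."""
    merged = [c for c in commits if c[2]]
    ai = sum(1 for kind, _, _ in merged if kind == "ai")
    return ai / len(merged)

print(f"By lines committed: {share_by_lines(commits):.0%}")     # -> 80%
print(f"By merged PRs:      {share_by_merged_prs(commits):.0%}")  # -> 25%
```

The same repository yields "80% AI-written" or "25% AI-written" depending on the yardstick, which is why unexplained headline figures deserve scepticism.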

The Main Takeaway

Although it’s possible to argue about the precise figures, one thing is clear: AI is increasingly becoming integrated within software engineering at leading tech companies. If the current trends continue, it seems we may be heading towards a time in the future where human developers engage more with problem-solving and design while AI does most of the coding.

Reddit Secretly Launched AI That Pretended to Be a Victim of Violence, an Opponent of BLM, and More

The University of Zurich conducted a secret experiment on Reddit: AI bots posted emotional comments from people who didn’t exist  –  including a rape victim, a black opponent of the Black Lives Matter movement, and even someone who blamed religious groups for mass murder. All without the users’ knowledge.

Research without consent

The AI bots operated in the popular Change My View subreddit, where participants openly ask for their views to be refuted.

The researchers did not warn moderators or users that their responses were being written by neural networks. Moreover, the comments collected data on users’ gender, age, location, and political views –  without their consent.

The university acknowledged that it had violated community guidelines, but considered the experiment “justified in light of its social significance.” Following a complaint from CMV moderators, the ethics committee limited itself to a verbal warning to the lead researcher and allowed the publication of a scientific article based on the experiment’s results.

Manipulations on behalf of a “psychologist” and a “patient”

The AI pretended to be:

  • a rape victim;
  • a psychologist working with trauma;
  • a black user speaking out against BLM;
  • a person who experienced poor treatment abroad;
  • a witness to religious crimes.

The goal was to test how convincing neural networks can be in disputes. But the moderators emphasize that the experiment went beyond the bounds of ethics and turned into a form of manipulation.

People entered into a discussion with fake characters, not knowing that their interlocutor was a machine collecting data.

No Consequences

The moderators filed a formal complaint, but the university only promised to increase oversight of future research. In the end, the article will not be published.

The administration believes that the “potential trauma is minimal” and the “value of the knowledge gained” is too high to be ignored.

What’s Next

Reddit users are outraged. Research conducted without consent and under the guise of sincere communication undermines trust in the platform.

New Age Technologies and Their Legal Rights: Analysing Autonomous AI Agents from a Legal Perspective

Authored by Ludovico Besana, Senior Test Engineer

As a still-emerging concept, autonomous AI agents are sure to become popular in Web3. Such bots have already started participating in DeFi and trading, proving that entire M2M networks and ecosystems can be powered completely by AI. At the same time, the operation of autonomous AIs raises serious concerns for existing legal frameworks.

In this article, I will analyse the “life” and “death” cycle of an AI agent from a legal standpoint, with particular attention to the criteria for granting it the identity of a digital cyborg, and propose pragmatic approaches to giving these entities a legal framework.

Fundamental questions   

The idea of autonomous AI agents operating on blockchain technology is no longer a mere fantasy. One well-known example is Terminal of Truth: an agent based on the Claude Opus model persuaded Marc Andreessen (a16z) to invest $50,000 in the launch of the Goatseus Maximus (GOAT) token, which the bot “religiously” promoted. GOAT now trades at a market cap above $370 million.

AI agents fitting seamlessly within the Web3 ecosystem is unsurprising. They may be restricted from opening bank accounts, but they can manage crypto wallets and X accounts. Currently, AI agents are primarily concerned with meme tokens, but the potential applications in decentralised governance, machine networks, oracles, and trading are enormous.  

The greater the efforts to make AI agents mimic human actions, the more challenges there will be from a legal standpoint. Every legal system needs to provide an answer to these questions: What legal status should AI agents have? Which entity, if any, holds the rights and the liabilities for their actions? In what manner can AI agents be structured and shielded from legal risks?

Fundamental Legal Issues with AI Agents

Lack of Legal Personality

Legal systems recognize only two types of entities: natural persons (people) and legal persons (companies), and autonomous AI agents do not fit into either category. Although they can imitate human behavior (e.g. through social media accounts), they do not have a body, moral consciousness, or legal identity.

Some theorists propose granting AI agents “electronic legal personality” — a status similar to that of corporations, but adapted for artificial intelligence. In 2017, the European Parliament even considered this issue, but the idea was rejected due to various concerns and risks that have not yet been addressed.

It is likely that autonomous AI agents will not receive the status of legal entities in the near future. However, as was the case with DAOs, some crypto-friendly jurisdictions will attempt to create special legal regimes and corporate forms tailored to AI agents.

Responsibility for actions and their consequences

Without legal personality, AI agents cannot enter into transactions, own property, or bear responsibility. For the legal system, they simply do not exist as subjects. However, they already interact with the outside world and perform legally significant actions that lead to legal consequences.

A logical question arises: who is the real party to the transaction, who acquires rights, and who is responsible for the consequences? From a legal perspective, an AI agent is currently a tool through which its owner or operator acts. Therefore, any actions of an AI agent are de jure actions of its owner, an individual or legal entity.

Thus, since an AI agent itself cannot acquire rights and responsibility, for its legal existence it needs a subject that is recognised by the legal system and is able to acquire rights and obligations in its place.

Regulatory Restrictions

The emergence of the first successful large language model (LLM), ChatGPT, generated unprecedented interest in AI and machine learning. It was only a matter of time before regulation was adopted. In 2024, the European Union adopted the AI Act, which remains the most comprehensive regulation in the field of artificial intelligence to date. In other countries, limited AI regulation has either already been adopted, is being introduced, or is planned.

The European Artificial Intelligence Act differentiates AI systems by their level of risk. For systems with zero or minimal risk, there is little or no regulation. In the case of a higher risk, AI is subject to restrictions and obligations, such as disclosing its nature.

AI agents that interact with third parties, for example by publishing posts or making on-chain transactions, may also fall under traditional regulation in areas such as consumer protection and personal data. In such cases, the activities of autonomous bots can be treated, for example, as the provision of services. The agents’ lack of a clear geographic footprint, combined with their global reach, complicates compliance.

Ethics

Since AI agents have limited capabilities and scope so far, their creators rarely think about ethics. Priority is given to autonomous (trustless) execution and speed, rather than deep ethical configuration.

However, having an “ethical compass” when making autonomous decisions in high-risk areas such as finance, trade, and management is at least desirable. Otherwise, erroneous data in the training set or trivial errors in configuration can lead to the agent’s actions causing harm to people. The higher the autonomy and discretion of the AI agent, the higher the risks.

Legal Structuring of AI Agents

Workable legal models for AI agents are of great importance for innovation, the development of the field as a whole, and the emergence of more advanced bots. While cryptocurrencies can already be called a regulated industry, in the case of AI agents, legal structuring is complicated by the fact that the industry is not standardized, so it requires a creative approach.

Approach to Structuring

In my opinion, one of the main goals of legal structuring of an autonomous AI agent should be to acquire its own legal personality and legal identity, independent of its creator. In this regard, the question arises: at what point can we consider that an AI agent really has these characteristics?

Every developer strives to ensure that their agent is as close as possible to a real person acting independently. It is logical that they would like to provide agents with freedom from a legal point of view. To achieve this, in my opinion, two key conditions must be met. First, the AI agent must be independent not only in making its own decisions, but also in the ability to implement them in a legal sense – to carry out its will and make final decisions regarding itself. Second, it must have the ability to independently acquire rights and obligations as a result of its actions, independently of its creator.

Since the AI agent cannot be recognized as an individual, the only way for it to achieve legal personality at the moment is to use the status of a legal entity. The agent will achieve legal personality when it can, as a full-fledged person, make independent decisions and implement them on its own behalf.

If successful, this order of things will bring the AI agent to life from a legal point of view. Such a digital person, having received legal existence, can well be compared to a digital cyborg. A cyborg (short for “cybernetic organism”) is a creature that combines mechanical-electronic and organic elements. In a digital cyborg, the mechanical part is replaced by a digital one, and the organic part is replaced by people who participate in the implementation of its decisions.

Our digital cyborg will consist of three key components:

  • AI agent – electronic brain;
  • corporate form – legal body;
  • people involved in performing tasks – organic hands.
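
The three-component structure above can be sketched as a small conceptual model. This is not legal or production code, and every name in it is illustrative; it simply encodes the key design constraint discussed below: decisions originate only in the electronic brain, and the organic hands execute them with no veto step.

```python
# Conceptual sketch of the "digital cyborg": AI agent as decision-maker,
# corporate form as legal body, human enforcers as hands without veto power.
# All class and method names are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class AIAgent:
    """Electronic brain: the sole source of decisions."""
    name: str

    def decide(self, proposal: str) -> str:
        # Real decision logic (models, on-chain signals) would live here.
        return f"APPROVED: {proposal}"

@dataclass
class HumanEnforcer:
    """Organic hands: executes instructions, cannot veto them."""
    name: str

    def execute(self, instruction: str) -> str:
        # Note the absence of any approval step: the enforcer simply
        # carries out whatever the agent decided.
        return f"{self.name} executed -> {instruction}"

@dataclass
class CorporateForm:
    """Legal body: routes the agent's decisions to the enforcers."""
    brain: AIAgent
    hands: list = field(default_factory=list)
    log: list = field(default_factory=list)

    def act(self, proposal: str) -> None:
        decision = self.brain.decide(proposal)
        for enforcer in self.hands:
            self.log.append(enforcer.execute(decision))

cyborg = CorporateForm(AIAgent("agent-1"), [HumanEnforcer("director")])
cyborg.act("sign service contract")
print(cyborg.log)
```

The point of the sketch is the control flow: the human appears only downstream of the decision, which is exactly the inversion of the traditional corporate hierarchy discussed in the following sections.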

The Challenges of Corporate Form

Traditional legal entity forms, such as LLCs and corporations, require that both the ultimate ownership and ultimate control reside in humans. Corporate structures are not designed for ephemeral digital identities, which brings us to the central challenge of legally structuring blockchain AI agents: the challenges of corporate form.

If we want to give an AI agent a legal identity through a corporate form and ensure its independence and autonomy within that structure, we need to be able to eliminate human control over such an entity. Otherwise, if ultimate control resides with humans, the AI becomes a tool rather than a digital person. We also need to ensure that in cases where a human is required to implement an AI decision, such as signing a contract or performing administrative tasks, that human cannot block or veto the AI agent’s decision (barring a “machine uprising”).

But how can this be done when traditional corporate forms require that people own and manage agents? Let’s find out.

Three key aspects of the framework

1. Blockchain environment

AI agents are capable of independently performing on-chain transactions, including interaction with multisig wallets and smart contracts. This allows the AI agent to be assigned a unique identifier – a wallet – through which it will give reliable instructions and commands to the blockchain. Without this, the existence of a real digital cyborg is not yet possible.

2. Autonomy and freedom of action

To maintain the full autonomy of the digital cyborg, it is important that people involved in the management of the legal structure cannot interfere with the actions of the AI agent or influence its decisions. This ensures that the artificial intelligence retains freedom of action and is able to implement its own will, and requires the adoption of both legal and technical measures.

For example, in order for the AI agent to truly own and control the blockchain wallet, the wallet can be created in a trusted execution environment (TEE). This ensures that no human has access to the wallet, its seed phrase, or its assets. From a legal perspective, the corporate documents of the legal entity used as a wrapper for the AI must provide for the correct distribution of control and authority, as well as security mechanisms that exclude human intervention and can be changed only in a limited number of cases.

3. Human Enforcers

Since we still live in a legal world, some decisions will require the AI agent to involve human enforcers. This means that the AI will instruct officials on what actions to take. This inverts the traditional hierarchy: in our scenario, the AI essentially gains control over humans, at least within its own corporate structure.

This aspect is perhaps the most interesting, since it requires an unconventional approach. One could even say that this state of affairs violates Isaac Asimov’s Second Law of Robotics, but I doubt anyone really cares about that right now. Besides, adequate emergency mechanisms and a proper “ethical compass” solve this problem, at least at this stage.

AI wrappers — legal structures for agents working on the blockchain

As we have already found out, traditional corporate structures are not suitable for our purposes and do not allow us to achieve the desired result. Therefore, below we will consider the structures that were developed for DAO and blockchain communities — these are both classic structures adapted for Web3 and specialized corporate forms for decentralized autonomous organizations.

From the point of view of the creator of the AI agent, legal structuring allows separating the agent from its creator, obtaining limited liability through a corporate structure, and also provides the opportunity to plan and optimize taxes and financial risks.

Foundations and trusts

A purpose trust and an ownerless foundation have many common characteristics, but differ in nature. A foundation is a full-fledged legal entity, while a trust is more of a contractual entity that often does not require state registration. We will consider these forms in the context of the most popular Web3 jurisdictions: foundations in the Cayman Islands and Panama, and trusts in Guernsey. The key advantages are the absence of taxes, high flexibility in procedures and management, and the ability to integrate blockchain into the decision-making process.

Both foundations and trusts require management in the form of individuals or legal entities. At the same time, they allow for the integration of smart contracts and other technical solutions into management. For example, management can be required to request approval from an AI agent through interaction with it, a smart contract, or a wallet controlled by AI. A more complex legal design will allow the agent to give instructions to management, including through “thoughts” generated by the AI. Thus, the use of trusts and foundations allows for the creation of more complex corporate structures adapted to AI agents and supporting their autonomy.

If necessary, the creator of an AI agent can act as a limited-power beneficiary, which will allow him to obtain financial rights and manage taxes without interfering with the agent’s activities and decisions.

Algorithmically-managed DAO LLCs

A DAO LLC is a special corporate form designed for decentralized organizations. However, it is possible to create a DAO LLC with only one participant, i.e. without a real organization. Below, we will consider this form in two of the most popular jurisdictions: Wyoming (USA) and the Marshall Islands.

We are talking specifically about algorithmically-managed DAO LLCs, since in such a company all power can be concentrated in smart contracts rather than human hands. This is an extremely important aspect: in our case, the smart contracts can be controlled by an AI agent, which effectively transfers all power in this corporate form to the artificial intelligence.

DAO LLCs also have flexibility in terms of procedures and corporate governance, so they can implement complex control and decision-making mechanisms, as well as reduce the level of human intervention in these processes.

Although the presence of a natural or legal person is still formally required, their powers may be significantly limited, for example to the execution of technical tasks, corporate actions, and the implementation of decisions made at the smart contract level. In this context, the role of a member (participant) of a DAO LLC may be performed by the creator of the AI agent, which will allow him to obtain financial rights and, in the future, the authority to distribute the profits received.

Simpler AI agents

Classical corporate structures can also be used to structure simpler AI agents, such as trading bots, since in this case there is no need to subordinate the corporate form to the decisions and discretion of the AI ​​agent. In this case, artificial intelligence continues to be a means or tool of its creator and does not claim the status of a full-fledged digital cyborg.

In conclusion

Autonomous AI agents can change the blockchain industry and significantly accelerate innovation in almost all areas. So far, they are at the very beginning of their path, but the pace of development is colossal, and very soon we may see real digital cyborgs: digital organisms with a stable thought process and an identity of their own. But this requires a combination of technical and legal innovations.

A $41,200 humanoid robot was unveiled in China

The Chinese company UBTech Robotics presented a humanoid robot for 299,000 yuan ($41,200). This is reported by SCMP.

Tien Kung Xingzhe was developed in collaboration with the Beijing Humanoid Robot Innovation Center. It is available for pre-order, with deliveries expected in the second quarter.

The robot is 1.7 meters tall and can move at speeds of up to 10 km/h. Tien Kung Xingzhe easily adapts to a variety of surfaces, from slopes and stairs to sand and snow, maintaining smooth movements and ensuring stability in the event of collisions and external interference.

The robot is designed for research tasks that require increased strength and stability. It is powered by the new Huisi Kaiwu system from X-Humanoid. The center was founded in 2023 by UBTech and several organizations, including Xiaomi, and develops products and applications for humanoid robots.

UBTech’s device is a step towards making humanoid robots cheaper, SCMP notes. Unitree Robotics previously attracted public attention by offering a 1.8-meter version of the H1 for 650,000 yuan ($89,500). These robots performed folk dances during the Lunar New Year broadcast on China Central Television in January.

EngineAI’s PM01 model sells for 88,000 yuan ($12,000), but it is 1.38 meters tall. Another bipedal version, the SA01, sells for $5,400, but without the upper body.

In June 2024, Elon Musk said that Optimus humanoid robots will bring Tesla’s market capitalization to $25 trillion.