Agentic AI: Pioneering Autonomy and Transforming Business Landscapes

Let’s make tech work for us

The new class of autonomous systems known as AI agents represents the latest evolution of AI technology and marks a new era in business. In contrast to traditional AI models, which simply follow the commands given to them and emit outputs in a fixed format, AI agents operate with a degree of freedom. According to Google, these agents are capable of functioning on their own, without constant human supervision. The World Economic Forum describes them as systems with sensors to perceive their environment and effectors to act on it. As AI agents evolve from rigid, rule-based frameworks into sophisticated models adept at intricate decision-making, they are expected to transform industries. With unprecedented autonomy comes equally unprecedented responsibility: the benefits agentic AI brings are accompanied by unique challenges that demand careful consideration, planning, governance, and foresight.

The Mechanics of AI Agents: A Deeper Dive

Traditional AI tools, such as Generative AI (GenAI) or predictive analytics platforms, rely on predefined instructions or prompts to deliver results. In contrast, AI agents exhibit dynamic adaptability, responding to real-time data and executing multifaceted tasks with minimal oversight. Their functionality hinges on a trio of essential components:

  • Foundational AI Model: At the heart of an AI agent lies a powerful large language model (LLM), such as GPT-4, LLama, or Gemini, which provides the computational intelligence needed for understanding and generating responses.
  • Orchestration Layer: This layer serves as the agent’s “brain,” managing reasoning, planning, and task execution. It employs advanced frameworks like ReAct (Reasoning and Acting) or Chain-of-Thought prompting, enabling the agent to decompose complex problems into logical steps, evaluate outcomes, and adjust strategies dynamically—mimicking human problem-solving processes.
  • External Interaction Tools: These tools empower agents to engage with the outside world, bridging the gap between digital intelligence and practical application. They include:
    • Extensions: Enable direct interaction with APIs and services, allowing agents to retrieve live data (e.g., weather updates or stock prices) or perform actions like sending emails.
    • Functions: Offer a structured mechanism for agents to propose actions executed on the client side, giving developers fine-tuned control over outputs.
    • Data Stores: Provide access to current, external information beyond the agent’s initial training dataset, enhancing decision-making accuracy.

This architecture transforms AI agents into versatile systems capable of navigating real-world complexities with remarkable autonomy.
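The orchestration loop described above can be sketched in a few lines of Python. Everything here is illustrative: `fake_llm` is a stub standing in for the foundational model, the `TOOLS` table plays the role of extensions, and the `Action`/`Observation` text format is a simplified ReAct convention, not any specific framework’s API.

```python
# Minimal sketch of a ReAct-style orchestration loop.
# The model and tools are stubs; no real agent framework is used.

TOOLS = {
    "get_weather": lambda city: f"18C and cloudy in {city}",
}

def fake_llm(history):
    # Stand-in for the foundational model: choose the next step
    # based on the conversation so far.
    if "Observation:" in history[-1]:
        # A tool result is available, so produce the final answer.
        return "Final Answer: " + history[-1].split("Observation: ")[1]
    return "Action: get_weather[Paris]"

def run_agent(task, max_steps=5):
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        step = fake_llm(history)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer: ")
        # Parse "Action: tool[arg]" and call the matching extension.
        name, arg = step.removeprefix("Action: ").rstrip("]").split("[")
        history.append(f"Observation: {TOOLS[name](arg)}")
    return "step budget exhausted"

print(run_agent("What's the weather in Paris?"))
```

The loop mirrors the decompose–act–observe–adjust cycle: the model proposes an action, the orchestration layer executes it through a tool, and the observation is fed back for the next decision.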

Multi-Agent Systems: The Newest Frontier

The Multi-Agent System (MAS) market is headed for tremendous growth: McKinsey projects a staggering growth rate of nearly 28% by 2030. Bloomberg recently predicted that AI breakthroughs will soon give rise to multi-agent systems, collaborative networks of AI agents working together toward ambitious objectives. These systems promise scalability beyond what single agents can deliver.

To picture this, imagine a smart city where multiple AI agents work alongside each other: one controls the traffic signals, another manages traffic-directing units, and a third reroutes emergency responders around congestion. All of this happens in real time.

Governance is key here to prevent systemic failure: conflicting commands between agents can cause paralysis or outright dysfunction. Multi-agent systems should be allowed a degree of standardized freedom, but their benefits can only be secured if shared protocols are defined and adhered to.

Opportunities and Challenges

The potential of AI agents is game-changing, but their independence raises serious concerns. The World Economic Forum highlights several challenges that companies must address:

  • Risks Associated with Autonomy: Ensuring safety and reliability becomes harder as agents grow more independent. For example, an unmonitored agent could execute a resource-allocation decision that triggers operational failures with cascading effects.
  • Lack of Accountability: Trust is already fragile because of the opaque reasoning of ‘black box’ systems, and it becomes even more critical in high-risk settings such as healthcare or finance. Ensuring transparency and accountability is non-negotiable.
  • Privacy and Security Risks: Handling large amounts of sensitive information puts trust in jeopardy. An agent can only function effectively with access to a multitude of sensitive systems and datasets, which raises the question: how do we grant sufficient permissions without compromising security? Strong policies are needed to enforce standards that protect sensitive data and privacy while preventing breaches.

Guarding against these risks requires proactive measures: continuous monitoring, adherence to ethical AI principles, and human-in-the-loop oversight for vital AI decisions. Organizations should also deploy auditing tools that track agent behavior and can redirect an agent when it deviates, allowing them to regain control and stay aligned with organizational goals.

The Human-AI Partnership

Even though AI agents function independently, their purpose is not to replace human reasoning but to augment it. The EU AI Act reminds us of the necessity of human intervention in sensitive processes such as security or legal compliance. The best arrangement is one where humans and machines work together: agents perform the monotonous, repetitive work of processing large amounts of data, which frees humans to be more strategic, creative, and ethical.

In a logistics company, for instance, an AI agent may autonomously optimize delivery routes using live traffic information, while a manager applies judgment before approving the AI’s plan, weighing customer preferences or other unforeseen factors. This preserves human control and supervision while still enhancing efficiency.
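That approval gate can be sketched very simply. All names here (`Plan`, `propose_route`, `execute`) are hypothetical; the “agent” is a trivial optimizer, and the point is only the control flow: nothing executes until a human signs off.

```python
# Hypothetical human-in-the-loop gate: the agent proposes a route plan,
# and a manager must approve it before it is executed.

from dataclasses import dataclass

@dataclass
class Plan:
    route: list
    approved: bool = False

def propose_route(stops):
    # The "agent": a trivial optimizer that orders stops by distance.
    return Plan(route=sorted(stops, key=lambda s: s[1]))

def execute(plan):
    # The hard guard: unapproved plans never run.
    if not plan.approved:
        raise PermissionError("plan requires human approval")
    return [stop for stop, _ in plan.route]

stops = [("Warehouse B", 12.5), ("Customer A", 3.2), ("Customer C", 7.8)]
plan = propose_route(stops)
plan.approved = True  # the manager signs off after review
print(execute(plan))
```

The design choice worth noting is that approval lives in the execution path, not in the agent: even a misbehaving optimizer cannot bypass the human check.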

Guidelines for Implementing Agentic AI Strategically

Google and the World Economic Forum converge on a central idea: the responsible use of AI agents can result in outstanding value creation and unparalleled innovation. To reap that value while keeping risks manageable, businesses should adopt the following practices:

  • Develop Skills: Train the workforce in building, implementing, and administering AI agents to ensure the effective application of the technology.
  • AI Ethics: Develop governance frameworks that adhere to international benchmarks such as the EU AI Act, requiring that agents operate fairly and accountably.
  • Set Boundaries: Delegated agent discretion must come with safeguards, established through explicit controls, that prevent overreach or decision-making outside the agent’s mandate.
  • Validate Continuously: Keep agent behavior aligned with organizational needs through active auditing, stress testing, and refinement against organizational value objectives.


Final thoughts

Because it integrates reasoning and planning, agentic AI can act on its own, and AI agents mark a pivotal leap in the evolution of artificial intelligence. Their potential to change fields such as personalized healthcare and smart cities is phenomenal, but deploying AI carelessly would be a grave mistake. For AI agents to be dependable companions, trust and security must anchor their development.

Organizations that strike the right balance, enabling agents to innovate while maintaining human supervision, will lead the charge in this technological revolution. Agentic AI is not just another business tool but a paradigm shift that re-imagines autonomy. That future will belong to those who embrace its potential with clarity and caution.

The Billion-Dollar Heist: How Bybit Survived the Largest Crypto Hack in History

On February 21, the cryptocurrency world was shaken when Bybit, one of the largest Bitcoin exchanges, fell victim to a staggering $1.5 billion hack – marking it as the biggest cyber heist in crypto history. Despite the massive breach, the platform continued operating, thanks in part to swift crisis management and the backing of industry heavyweights.

How the Hack Unfolded

On February 21, on-chain detective ZachXBT reported suspicious ETH outflows from Bybit totaling 499,395 ETH (about $1.46 billion at the time). The company’s CEO Ben Zhou confirmed the hack, and the team almost immediately published a statement saying the incident occurred during a transfer of ETH from cold multisig storage to a hot wallet.

The attackers spoofed the transaction-signing interface so that all participants in the procedure saw the correct address, while the underlying smart-contract logic was changed. The hackers thereby gained control of the ETH wallet and withdrew all the funds.

Zhou hastened to reassure clients and emphasized that the platform remains solvent and continues to process withdrawal requests, albeit with a delay: within about 10 hours after the hack, the exchange recorded a record number of withdrawal requests – more than 350,000. At that time, about 2,100 requests remained pending, while 99.994% of transactions were completed.

Nevertheless, the platform’s CEO still asked partners to provide a loan in ETH – the funds were needed to cover liquidity during the crisis period. As a result, more than 10 companies supported the exchange.

Huobi co-founder Du Jun contributed 10,000 ETH and promised not to withdraw it for a month. The co-founders of Conflux and Mask Network also announced deposits of Ether to the exchange’s cold wallets. Coinbase Head of Product Conor Grogan wrote that Binance and Bitget sent more than 50,000 ETH there as well.

According to reporter Colin Wu, 12,652 stETH (around $33.75 million) were transferred from MEXC to Bybit’s cold wallet.

The ETH price responded to the Bybit hack by falling to $2,625 (Binance), but recovered fairly quickly. By the evening of February 23, the quotes momentarily exceeded $2,850, after which they corrected to $2,690 (as of February 24).

Bybit representatives said that information about the incident has been “reported to the relevant authorities.” In addition, cooperation with on-chain analytics providers has allowed them to identify and isolate the associated addresses, limiting the attackers’ ability to “withdraw ETH through legitimate markets.”

As of February 24, Bybit has fully restored its Ethereum reserves (~444,870 ETH).

Who Was Behind the Attack?

According to ZachXBT, unknown individuals quickly exchanged some of the stolen mETH and stETH tokens for ETH via decentralized exchanges. 10,000 ETH were divided between 36 wallets.

The founder of DeFi Llama, 0xngmi, noted that the methods in this attack are similar to the incident with the Indian exchange WazirX in July 2024. At that time, Elliptic analysts concluded that North Korean hackers were behind the attack.

0xngmi’s assumption was confirmed by Arkham Intelligence. According to them, on the day of the Bybit hack, ZachXBT investigator “provided irrefutable evidence of Lazarus Group’s involvement in the hack”:

“The analysis contains a detailed breakdown of test transactions and associated wallets used before the attack, as well as a number of graphs and timestamps. This data has been transferred to the exchange team to assist with the investigation.”

The founder of the AML service BitOK and crypto investor Dmitry Machikhin noted that the stolen cryptocurrency is actively being withdrawn from the Ethereum network to other blockchains. According to his observations, immediately after the hack, the assets were distributed to 48 different addresses.

At the second stage:

  • crypto assets from these addresses were gradually split into even smaller parts (50 ETH each);
  • funds were sent through bridges (eXch and Chainflip) to other networks.

The image shows how one of the 48 addresses splits transactions into 50 ETH chunks and routes them to Chainflip.

According to Taproot Wizards co-founder Eric Wall, the North Korean hackers are likely to convert all ERC-20 tokens to ETH, then exchange the resulting ETH for BTC, and then gradually transfer the bitcoins to yuan through Asian exchanges. In his opinion, the process could take years.

ZachXBT reported that Lazarus transferred 5,000 ETH to a new address and began laundering the funds through the centralized mixer eXch, then converting them to bitcoin through Chainflip. Chainflip said it has recorded attempts by the attackers to move the stolen Bybit funds into bitcoin through its platform. It disabled some front-end services, but completely stopping the protocol is impossible given its decentralized structure of 150 nodes.

The mETH Protocol team reported that they blocked the withdrawal of 15,000 cmETH (~$43.5 million) and redirected the assets from the attacker’s address to a recovery account. Tether CEO Paolo Ardoino said that the company froze 181,000 USDT related to the attack.

In a comment to ForkLog, Bitget CEO Gracie Chen emphasized that “the exchange’s systems have already blacklisted the attackers’ wallets.”

As of February 23, the attackers had exchanged 37,900 ETH (about $106 million) for bitcoin and other assets through Chainflip, THORChain, LiFi, DLN, and eXch. The hackers’ address still had 461,491 ETH of the 499,395 ETH stolen.

What to do?

After the hack, some community members started talking about rolling back the state of the Ethereum network to return the stolen funds. For instance, former BitMEX CEO Arthur Hayes noted that as an investor with large ETH reserves, he would support a community decision to roll the chain back to an earlier state – as happened after the hack of The DAO in 2016.

Bitcoin maximalist Samson Mow also spoke out in support of restoring the blockchain, but leading Ethereum developer Tim Beiko criticized the idea. According to him, the Bybit incident involved an incorrect presentation of transaction data in the hacked interface, and not technical problems.

In addition, after the hack, the funds quickly spread across the complex ecosystem of the second-largest cryptocurrency by capitalization. “Rolling back” the network would mean canceling many legitimate transactions, some of which are related to actions outside the Ethereum network. The Vice President of Yuga Labs, nicknamed Quit, also drew attention to this. He added that many ordinary users would lose money, and the accounting systems of large players like Circle and Tether would collapse.

What’s the bottom line

The Bybit hack turned out to be the largest in the crypto industry so far. However, the head of Bitget did not find any reason to panic: according to her, the losses are equivalent to Bybit’s annual profit ($1.5 billion), and clients’ funds are completely safe.

The incident did not affect market sentiment either. According to Glassnode, the implied volatility of the first cryptocurrency is close to record lows. Price fluctuations against the backdrop of the hacker attack decreased after Strategy founder Michael Saylor published a chart of the company’s coin purchases.

This time, there was no platform crash or market panic, and a quick response and community participation helped restore liquidity and partially block the stolen assets. However, the incident highlighted a persistent problem – even large centralized platforms are still susceptible to attacks and vulnerable to hackers.

Grok Names Elon Musk as the Main Disinformer

Elon Musk is the main disseminator of disinformation on X, according to Grok, the AI assistant from the entrepreneur’s startup xAI that is integrated into his social network.

The billionaire has a huge audience and often spreads false information on various topics, the chatbot claims. Among the other disinformers named by the neural network: Donald Trump, Robert F. Kennedy Jr., Alex Jones, and RT (Russian state television).

Trump shares false claims about elections, Kennedy Jr. spreads them about vaccines, and Alex Jones is known for conspiracy theories. RT lies about political issues, Grok added.

Grok’s Top Disseminators of Disinformation. Data: X.

The chatbot cited Rolling Stone, The Guardian, NPR, and NewsGuard as sources of information.

“The selection process involved analyzing multiple sources, including academic research, fact-checking organizations, and media reports, to identify those with significant influence and a history of spreading false or misleading information,” the AI noted.

The criteria for compiling the rankings included the volume of false information spread, the number of followers, and mentions in credible reports.

When asked for clarification, Grok noted that the findings may be biased because the sources provided are mostly related to the funding or opinions of Democrats and liberals.

Recall that in January, artificial intelligence was used to spread fake news about the fires in Southern California.

A similar situation arose after Hurricane Helene.

Google Unveils Memory Feature for Gemini AI Chatbot

Google has launched a notable update to its Gemini AI chatbot, equipping it with the ability to remember details from previous conversations, a development experts are calling a major advancement.

In a blog post released on Thursday, Google detailed how this new capability allows Gemini to store information from earlier chats, provide summaries of past discussions, and craft responses tailored to what it has learned over time.

This upgrade eliminates the need for users to restate information they’ve already provided or sift through old messages to retrieve details. By drawing on prior interactions, Gemini can now deliver answers that are more relevant, cohesive, and enriched with additional context pulled from its memory. This results in smoother, more personalized exchanges that feel less fragmented and more like a continuous dialogue.

Rollout Plans and Broader Access
The memory feature is first being introduced to English-speaking users subscribed to Google One AI Premium, a $20 monthly plan offering enhanced AI tools. Google plans to extend this functionality to more languages in the near future and will soon bring it to business users via Google Workspace Business and Enterprise plans.

Tackling Privacy and User Control
While the ability to recall conversations offers convenience, it may raise eyebrows among those concerned about data privacy. To address this, Google has built in several options for users to oversee their chat data. Through the “My Activity” section in Gemini, individuals can view their stored conversations, remove specific entries, or decide how long data is kept. For those who prefer not to use the feature at all, it can be fully turned off, giving users complete authority over what the AI retains.

Google has also made it clear that it won’t use these stored chats to refine its AI models, putting to rest worries about data being repurposed.

The Race to Enhance AI Memory

Google isn’t alone in its efforts to boost chatbot memory. OpenAI’s Sam Altman has highlighted that better recall is a top demand from ChatGPT users. Over the last year, both companies have rolled out features letting their AIs remember things like a user’s favorite travel options, food preferences, or even their preferred tone of address. Until now, though, these memory tools have been fairly limited and didn’t automatically preserve entire conversation histories.

Gemini’s new recall ability marks a leap toward more fluid and insightful AI exchanges. By keeping track of past talks, it lets users pick up where they left off without losing the thread, proving especially handy for long-term tasks or recurring questions.

As this feature spreads to more users, Google underscores its commitment to transparency and control, ensuring people can easily manage, erase, or opt out of data retention altogether.

Sam Altman talks about the features of GPT-4.5 and GPT-5

OpenAI CEO Sam Altman shared the startup’s plans to release GPT-4.5 and GPT-5 models. The company aims to simplify its product offerings by making them more intuitive for users.

Altman acknowledged that the current product line has become too complex, and OpenAI is looking to change that.

“We hate model selection as much as you do and want to get back to magical unified intelligence,” he wrote.

GPT-4.5, codenamed Orion, will be the startup’s last AI model without a “chain of reasoning” mechanism. The next step is to move toward more integrated solutions.

The company plans to combine the o and GPT series models, creating systems capable of:

  • using all available tools;
  • independently determining when deep thinking is needed and when an instant solution is enough;
  • adapting to a wide range of tasks.

GPT-5 integrates various technologies, including o3. Other innovations will include canvas capabilities (Canvas-mode), search, deep research (Deep Research) and much more.

Free GPT-5 subscribers will get unlimited access to the model’s tools on standard settings. Plus and Pro account holders will be able to use advanced features with a higher level of intelligence.

Regarding the release dates of GPT-4.5 and GPT-5, Altman wrote in the comments to the tweet about “weeks” and “months”, respectively.

According to Elon Musk, ChatGPT’s competitor Grok 3 is in the final stages of development and will be released in one to two weeks, Reuters reports.

“Grok 3 has very powerful reasoning capabilities, so in the tests we’ve done so far, Grok 3 outperforms all the models that we know of, so that’s a good sign,” the entrepreneur said during a speech at the World Summit of Governments in Dubai.

Recall that Altman turned down a $97.4 billion bid from Musk and a group of investors to buy the non-profit that controls OpenAI. The startup’s CEO called it an attempt to “slow down” a competing project.

Managing Large-Scale AI Systems: Data Pipelines and API Security

Artificial Intelligence is revolutionizing sectors across the globe, and as an organization scales its AI work, the infrastructure that anchors these systems must evolve with it. At the heart of this infrastructure lie data pipelines and APIs, which are crucial to the efficient functioning and performance of AI systems.

However, as companies adopt AI across their operations, data pipelines and API security present a major challenge. Weak management of these components can lead to data leakage, operational inefficiency, or catastrophic failure.

In this article, we’ll explore the key considerations and strategies for managing data pipelines and API security, focusing on real-world challenges faced by organizations deploying large-scale AI systems.

Data Pipelines: Intrinsic Building Block of AI Systems

Fundamentally, a data pipeline defines the flow of information from various sources through a series of steps, eventually feeding the AI models that rely on this input for training and inference. Large AI systems, especially those tackling complex problems such as natural language processing or real-time recommendation engines, depend heavily on high-quality, timely data. Efficient management of data pipelines is therefore crucial to the efficacy and accuracy of AI models.

Scalability and Performance Optimization: One of the major problems related to data pipelines is scalability. In a small-scale implementation of AI, a simple data ingestion process might work. However, when the system grows and more data sources are added, performance bottlenecks can crop up. Large-scale AI applications often require processing large amounts of data in real-time or near real-time.

Achieving this goal requires infrastructure that can accommodate increasing demand without losing efficiency in vital operations. Distributed systems like Apache Kafka, combined with cloud-based services such as Amazon S3, provide scalable solutions for moving data efficiently.
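At a small scale, the ingest → transform → sink flow such systems implement can be illustrated with plain Python generators. This is a toy stand-in for the architecture, not Kafka’s or S3’s actual API; all function and field names are invented for illustration.

```python
# Toy sketch of a streaming pipeline built from composable stages,
# illustrating the ingest -> transform -> sink flow that distributed
# systems implement at scale.

def ingest(records):
    # Stage 1: yield raw events one at a time (stands in for a topic/queue).
    yield from records

def transform(stream):
    # Stage 2: normalize each event before it reaches the model.
    for event in stream:
        yield {"user": event["user"].strip().lower(),
               "value": float(event["value"])}

def sink(stream):
    # Stage 3: collect results (stands in for writing to object storage).
    return list(stream)

raw = [{"user": " Alice ", "value": "3"}, {"user": "BOB", "value": "4.5"}]
print(sink(transform(ingest(raw))))
```

Because each stage consumes and yields a stream, stages can be swapped or parallelized independently, which is exactly the property that makes the distributed versions of this pattern scale.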

Data Quality and Validation: Regardless of the design excellence of the artificial intelligence model, subpar data quality will result in erroneous predictions. Consequently, the management of data quality is an indispensable component of data pipeline administration. This process encompasses the elimination of duplicates, addressing absent values, and standardizing datasets to maintain consistency across various sources.

With tools such as Apache Beam and AWS Glue, one gets a platform for real-time data cleansing and transformation, which ensures that only the most accurate and relevant data flows to the AI model.
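A rough sketch of such a cleansing step (deduplication, filling missing values, and standardizing fields) is below. The record layout and field names are invented for illustration; this approximates in miniature what a Beam or Glue transform would do over a real dataset.

```python
# Hedged sketch of a data-cleansing step: drop duplicates, fill
# missing values, and standardize fields across sources.

def clean(rows, fill_value=0.0):
    seen, out = set(), []
    for row in rows:
        key = row.get("id")
        if key in seen:          # drop duplicate records by id
            continue
        seen.add(key)
        out.append({
            "id": key,
            # standardize names; substitute a placeholder when missing
            "name": (row.get("name") or "unknown").strip().title(),
            # fill absent numeric values with a default
            "score": row["score"] if row.get("score") is not None else fill_value,
        })
    return out

rows = [
    {"id": 1, "name": " alice ", "score": 0.9},
    {"id": 1, "name": "alice", "score": 0.9},   # duplicate
    {"id": 2, "name": None, "score": None},     # missing values
]
print(clean(rows))
```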

Automation, Monitoring, and Fault Management: Automation becomes a key requirement in large AI environments where data continuously flows in from many sources. Automated data pipelines reduce the need for human intervention, while real-time monitoring lets an organization catch errors before they affect business operations. Platforms like Datadog and Grafana provide real-time views of pipeline health, surfacing latency or data corruption as it occurs, and support automated error-handling processes.

API Security: Gateway to Artificial Intelligence Systems

APIs are essentially bridges connecting applications, services, and systems to an AI model, which makes them part and parcel of the core of modern AI systems. They are also among the weakest links in large-scale systems. The rise of AI has meant a proliferation of API endpoints, and each endpoint is a potential avenue for a breach, possibly a serious one, if not well guarded.

Authentication and Authorization: Basic but crucial security measures for APIs include robust authentication and authorization. Without proper authentication, APIs can become a gateway to sensitive information and functions inside the AI system. OAuth 2.0 and API keys are among the strategies that offer flexible methods of securing API access. However, applying these techniques is not enough; API access logs must be audited regularly to ensure that the right users hold the proper access levels.
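The API-key half of this can be sketched with the standard library. This is a minimal illustration, not a production scheme (a real deployment would use OAuth 2.0 and a managed key store); the client and key names are invented.

```python
# Minimal API-key authentication sketch: keys are stored hashed,
# and comparison is constant-time to avoid timing side channels.

import hashlib
import hmac

# Server-side store holds hashes, never raw keys (illustrative data).
API_KEYS = {"svc-reporting": hashlib.sha256(b"s3cret-key").hexdigest()}

def authenticate(client_id, presented_key):
    stored = API_KEYS.get(client_id)
    if stored is None:
        return False
    digest = hashlib.sha256(presented_key.encode()).hexdigest()
    # compare_digest avoids leaking matching-prefix length via timing
    return hmac.compare_digest(stored, digest)

print(authenticate("svc-reporting", "s3cret-key"))  # valid key
print(authenticate("svc-reporting", "wrong"))       # invalid key
```

Storing only hashes means a leaked key table does not directly expose usable credentials, the same rationale as password hashing.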

Rate Limiting and Throttling: Large-scale AI systems are vulnerable to malicious actors mounting Distributed Denial-of-Service (DDoS) attacks, in which attackers flood API endpoints with requests until the system crashes. Rate limiting and throttling mechanisms prevent this by allowing only a limited number of requests per user within a given period of time.

This ensures that no single user or collective group of users can overwhelm the system, and hence keeps the system intact and available.
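One common way to implement this is a token bucket: each client gets a bucket of `capacity` tokens that refills at `rate` tokens per second, and a request is rejected when the bucket is empty. The sketch below is illustrative, not a production limiter (which would also need per-client buckets and thread safety).

```python
# Token-bucket rate limiter sketch.

import time

class TokenBucket:
    def __init__(self, capacity, rate):
        self.capacity = capacity      # max burst size
        self.rate = rate              # tokens refilled per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1          # spend one token per request
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)
print([bucket.allow() for _ in range(5)])  # first 3 pass, rest throttled
```

The bucket permits short bursts up to `capacity` while enforcing the long-run average of `rate` requests per second, which is why it is gentler on legitimate traffic than a fixed-window counter.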

Encryption and Data Protection: Protecting data involves more than securing the AI models and databases; data must also be protected as it flows through the system via APIs. Encrypting data at rest and in transit, for example with SSL/TLS, ensures that even if an attacker intercepts the data, it remains unreadable. Combined with other data-protection approaches, encryption shields sensitive information such as personal data and financial records from unauthorized access.

Anomaly Detection and Monitoring: In large AI ecosystems, it is impossible to manually monitor each and every API interaction for potential security breaches. It is here that AI can be a strong ally. State-of-the-art security solutions, such as Google’s Cloud Armor or machine-learning-powered anomaly detection algorithms, can monitor API traffic in real time to spot unusual activities or behavior that may indicate an attack.

By leveraging AI to secure the API infrastructure in this way, the system is better defended against emerging threats.
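Even without machine learning, a statistical baseline conveys the idea. The sketch below flags minutes whose request counts deviate sharply from the historical median using a modified z-score; it is a far simpler stand-in for the ML-powered detectors mentioned above, and the traffic numbers are made up.

```python
# Simple anomaly detector for API traffic: flag request counts whose
# modified z-score (based on median absolute deviation, which is
# robust to the outliers we are hunting) exceeds a threshold.

from statistics import median

def anomalies(counts, threshold=3.5):
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:
        return []  # no spread, nothing to flag
    # 0.6745 scales MAD to be comparable with a standard deviation
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

traffic = [120, 130, 125, 118, 122, 5000, 127]  # requests per minute
print(anomalies(traffic))  # index of the spike
```

Median-based scoring matters here: a single huge spike inflates the mean and standard deviation so much that a plain z-score can fail to flag it, while the MAD-based score stays robust.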

Balancing Security and Performance

One of the biggest challenges organizations face in managing data pipelines and API security is balancing these measures against performance. For instance, encrypting all data moving through a pipeline dramatically increases security, but the added latency can degrade performance and diminish the overall effectiveness of the system. Similarly, very stringent rate limiting protects against DDoS attacks but can lock out legitimate users during periods of high demand.

In short, the key is finding a balance that serves both security and performance. This requires tight collaboration between security experts, data engineers, and developers. A DevSecOps methodology ensures that security is woven into every stage of the development and deployment lifecycle without sacrificing performance, and continuous testing and incremental improvement are essential for tuning security against scalability.

Conclusion

As AI systems grow in scale and complexity, managing data pipelines and securing APIs become critical. An organization that fails to address these aspects risks data breaches, system-wide inefficiencies, and reputational damage.

By using scalable data-pipeline frameworks, protecting APIs with strong authentication, encryption, and monitoring, and maintaining a proper balance between security and performance, an organization can harness the full potential of artificial intelligence while minimizing risk. With the right strategies and tools, data-pipeline and API-security oversight can be integrated seamlessly into an organization’s AI infrastructure, ensuring reliability, efficiency, and security as systems scale.

Authored by Heng Chi, Software Engineer

How a Scam Recruiter Can Run a Virus on Your PC

Imagine getting an offer for your dream job, but handing over your computer to a hacker in the process.

This isn’t a plot from a cybersecurity thriller. It’s the reality of a growing threat in digital recruitment, where job scams have evolved from phishing emails to full-blown remote-code-execution attacks disguised as technical assessments. We invited Akim Mamedov, a CTO, to share his experience and recommendations.

***

For quite some time there were rumors that a new type of hiring scam had emerged, especially on platforms like LinkedIn. I didn’t pay enough attention until I encountered the scheme personally.

The truth is that almost every scam relies on social engineering, i.e., luring a person into performing some action without paying enough attention. This one is no different: the desired outcome is running malicious code on the victim’s computer. Now let’s dive into the details and explore how the scheme works and how the bad guys attempt to do their dirty business.

While browsing LinkedIn, I received a message from a guy about an interesting job offer. He described the role in detail, promised a good salary, and actively vied for my attention.

Before switching to Telegram I checked his profile, and it looked pretty decent: solid work experience, extensive profile information, a linked university, and the company where he supposedly worked.

After moving to Telegram, we scheduled a call.

On the call, I had a chance to see him briefly before he immediately turned his camera off, so I had no opportunity to take a screenshot of him. This is when things started to look suspicious as hell, so I began screenshotting everything.

He asked a couple of quick questions, such as describing a project I had worked on and confirming experience with particular technologies. At the end of the call he said there was still a small test task I had to solve, after which they would hire me.

That’s where the interesting part begins. I opened the archive he sent and started checking the code.

Meanwhile, I messaged the “HR rep” a couple of questions; he realized I was aware of the malware and deleted his messages on Telegram and LinkedIn. Now let’s focus on what the code does.

At first glance, it’s a simple JavaScript backend project.

But what are @el3um4s/run-vbs and python-shell doing inside this simple JS test task?

After a quick search for usages, I found the file where the package is actually used.

There are two files: one for Windows and another for any other OS with Python installed. Let’s check the one with the Python code.

Inside the Python file is a script that collects some computer information and sends it to a server. The response from that server can contain instructions which go directly to the exec() function, thus executing arbitrary code on the system. It looks like a botnet script that keeps a persistent connection to the attacker’s server and waits for instructions to act on. Needless to say, running this script hands your system to an attacker, allowing them to read sensitive data, tamper with OS services, and use your computer’s resources.


The impact of this scheme could be big enough to infect thousands of computers. Plenty of confident developers will consider the test task too easy to spend more than a couple of minutes on and will try to finish it fast. Junior developers are at risk too: lured by high salaries and undemanding job descriptions, they will run the project without properly understanding it.

In conclusion: be mindful of the code you run, and always review any source code or script before executing it.
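A small audit script can automate the first pass of that review. The sketch below is a hypothetical helper, not a real security tool: it flags the two packages named above and a few dangerous call patterns before you ever run `npm install`.

```python
import json
import re
from pathlib import Path

# Packages seen in fake "test task" projects (the two from the story above);
# extend this list for your own reviews.
SUSPICIOUS_PACKAGES = {"@el3um4s/run-vbs", "python-shell"}

# Patterns that warrant a manual look before you ever run the project.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\bexec\s*\("),    # arbitrary code execution
    re.compile(r"\beval\s*\("),
    re.compile(r"child_process"),  # spawning shells from Node
]

def audit_project(root: str) -> list[str]:
    """Return human-readable findings for a project directory."""
    findings = []
    root_path = Path(root)

    # Check declared dependencies in package.json.
    pkg = root_path / "package.json"
    if pkg.exists():
        data = json.loads(pkg.read_text())
        deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
        for name in deps:
            if name in SUSPICIOUS_PACKAGES:
                findings.append(f"suspicious dependency: {name}")

    # Grep source files for dangerous call patterns.
    for path in root_path.rglob("*"):
        if path.suffix not in {".js", ".py", ".vbs"}:
            continue
        text = path.read_text(errors="ignore")
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path.name}: matches {pattern.pattern}")
    return findings
```

A clean report is not proof of safety, of course; it only tells you where to look first.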


Unleashing Powerful Analytics: Harnessing Cassandra with Spark

Authored by Abhinav Jain, Senior Software Engineer

The adoption of Apache Cassandra and Apache Spark is a game-changer for organizations seeking to transform their analytics capabilities in today's data-driven world. With its decentralized architecture, Apache Cassandra handles huge volumes of data across multiple data centers with minimal downtime, providing both fault tolerance and linear scalability, which is why more than 1,500 companies, including Netflix and Apple, deploy it. Apache Spark complements this system by processing data in memory, at speeds up to 100 times faster than disk-based systems, greatly enhancing the setup Cassandra provides.

A fusion of Cassandra and Spark results in not just a speedup but an improvement in data analytics quality. Organizations that use this combination report drastically reduced data processing times, from hours to minutes, which is vital for finding insights quickly. The two technologies work well together, and when used jointly they are well suited for real-time trend analysis, helping businesses stay ahead in competitive markets.

On top of that, the integration of these two technologies answers the growing demand for flexible and scalable solutions in fields as demanding as finance, where integrity, validity, and speed are paramount. This cooperation helps organizations not only manage larger datasets more expediently but also extract valuable intelligence pragmatically, informing both day-to-day operations and strategic business moves. Given this, familiarity with Cassandra's integration with Spark should be on the roadmap of every organization that intends to improve its operational analytics.

Preface: Combining Cassandra’s Distribution with Spark’s In-Memory Processing

Apache Cassandra has been a common choice for organizations that need distributed storage and handling for large volumes of data. Its decentralized architecture and tunable consistency levels, along with the ability to spread large amounts of data across multiple nodes without introducing significant delays, are what make it ideal. Apache Spark, in turn, excels at processing and analyzing data in memory, which makes it an outstanding partner for Cassandra, able to deliver both real-time analytics and batch processing.

Setting Up the Environment

To prepare the environment for analytics with Cassandra and Spark, you start by installing Apache Cassandra and then launching a Spark cluster. Both components need individual attention during configuration so they work in harmony and each performs at its best. Including a connector such as the DataStax Spark Cassandra Connector is pivotal, since it enables effective data flow between the Spark and Cassandra systems. The connector speeds up query operations by giving Spark direct, parallelized access to data in Cassandra with minimal network overhead.

With the connectors configured, it is equally vital to tune the settings to your workload and data volume. This could entail tweaking Cassandra's compaction strategies and Spark's memory management settings, adjustments made in anticipation of the incoming data load. The last step is verifying the setup with test data: a successful run confirms the integration works, enabling seamless analytics. This setup acts as a fulcrum for both technologies, allowing each to be used at full capacity in one coherent analytics environment.

Performing Analytics with Spark and Cassandra

Combining Spark with Cassandra enhances data processing by pairing Spark's powerful in-memory computing with Cassandra's distributed storage model. End users can run advanced queries over large datasets directly against Cassandra's storage. These capabilities are extended by the libraries bundled with Spark, such as MLlib for machine learning, GraphX for graph processing, and Spark SQL for structured data handling, which support complex transformations, predictive analytics, and data aggregation. Furthermore, by caching data in memory, Spark speeds up iterative algorithms and repeated queries, making it ideal for workloads with frequent data access. The integration streamlines workflows and maintains high performance even as it scales to meet growing big data demands.

Real-time Analytics and Stream Processing

Real-time analytics with Spark and Cassandra is a strong approach for organizations that need to ingest and immediately analyze data flows. This is especially valuable for businesses where speed and freshness of information matter, for example monitoring financial transactions, social network activity, or IoT output. Through Spark Streaming, data can be ingested in micro-batches and processed continuously, with the possibility of applying complex algorithms on the fly. When Spark is combined with Cassandra's change data capture (CDC) feature or tightly integrated with Apache Kafka as the message queuing layer, it becomes a powerful tool for building feedback-driven analytical solutions that support dynamic decision-making and adapt to changes surfaced in incoming data streams.
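The micro-batch model itself is easy to picture without a cluster. The following plain-Python sketch is a conceptual illustration, not the Spark Streaming API: it chunks an event stream into fixed-size batches and updates per-account totals between batches, the way a streaming job maintains state from one micro-batch to the next.

```python
from collections import defaultdict
from itertools import islice

def micro_batches(events, batch_size):
    """Yield fixed-size micro-batches, mimicking the micro-batch model."""
    it = iter(events)
    while batch := list(islice(it, batch_size)):
        yield batch

def run_stream(events, batch_size=3):
    """Maintain running per-account totals, updated one micro-batch at a
    time, the way a streaming job carries state between batches."""
    totals = defaultdict(float)
    for batch in micro_batches(events, batch_size):
        for account, amount in batch:
            totals[account] += amount
    return dict(totals)
```

In real Spark Streaming the batches arrive on a time interval and the state lives in the cluster, but the shape of the computation is the same.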

Machine Learning and Advanced Analytics

In addition to traditional analytics tasks, Spark opens up possibilities for advanced analytics and machine learning on Cassandra data. Users can build and train machine learning models on Cassandra-stored data without having to move or duplicate it, enabling predictive analytics, anomaly detection, and other high-end use cases through Spark's MLlib and ML packages.

Best Practices and Considerations

To maximize the potential of a Spark and Cassandra integration, follow established best practices. Model Cassandra's data around your query patterns to reduce read and write latencies. When designing partition keys, distribute data evenly across nodes to prevent hotspots, and configure Spark's memory and core settings appropriately to avoid resource overcommitment and the performance issues that come with it.
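The hotspot concern can be illustrated with a toy partitioner. The sketch below is plain Python, not Cassandra's actual Murmur3 token partitioner: it hashes partition keys onto a set of nodes, and a high-cardinality key (like a user id) spreads rows evenly, while a low-cardinality key (like a country code) would pile them onto a few nodes.

```python
import hashlib
from collections import Counter

def node_for(partition_key: str, num_nodes: int) -> int:
    """Map a partition key to a node; a sketch in the spirit of
    token-based partitioning, not Cassandra's real algorithm."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

def distribution(keys, num_nodes):
    """Count how many rows land on each node."""
    return Counter(node_for(k, num_nodes) for k in keys)
```

Running `distribution` over a thousand user ids on five nodes gives each node roughly a fifth of the rows; running it over a handful of country codes would not, which is exactly the hotspot the best practice above warns against.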

Moreover, both Spark and Cassandra clusters should be monitored continuously. Tools such as Spark's web UI and Cassandra's nodetool surface performance metrics that reveal bottlenecks quickly. Put strict data governance policies in place, including regular audits and compliance checks, to ensure data integrity and security. Secure access to data with authentication and encryption (both in transit and at rest) to prevent unauthorized access and breaches.

Conclusion

Combining Apache Cassandra and Apache Spark creates a powerful platform for large-scale analytics, helping organizations extract valuable, meaningful insights from data far faster than before. By taking advantage of what each technology does best, companies can stay ahead of the competition, foster innovation, and ensure their decisions are based on quality data. Be it analyzing historical data, processing streaming data as it flows, or building machine learning frameworks, Cassandra and Spark together form an adaptable, expandable solution for your analytical needs.

CIA and MI6 chiefs reveal the role of AI in intelligence work

The heads of the CIA and MI6 revealed how artificial intelligence is changing the work of intelligence agencies, helping to combat disinformation and analyze data. AI technologies are playing a major role in modern conflicts and helping intelligence agencies adapt to new challenges.

During a recent joint appearance, CIA Director Bill Burns and MI6 Chief Richard Moore described how artificial intelligence is transforming the work of their intelligence agencies.

According to them, the main task of the agencies today is “adapting to modern challenges”. And it is AI that is central to this adaptation.

New challenges in modern conflicts

The heads of intelligence noted the key role of technology in the conflict in Ukraine.

For the first time, combat operations combined modern advances in AI, open data, drones and satellite reconnaissance with classical methods of warfare.

This experience confirmed the need not only to adapt to new conditions, but also to experiment with technology.

Combating disinformation and global challenges

Both intelligence agencies actively use AI to “analyze data, identify key information, and combat disinformation.”

The intelligence chiefs named China as one of the most notable threats. In response, the CIA and MI6 have reorganized their services to work more effectively in this area.

AI — a tool for defense and attack

Artificial intelligence helps intelligence agencies not only analyze data, but also protect their operations by creating “red teams” to check for vulnerabilities.

The use of cloud technologies and partnerships with the private sector make it possible to unleash the full potential of AI.

Swift 6 Unveiled: Key Updates and a Comparison With C++

With the official announcement of Swift 6, Apple has released major updates to its programming language. In this post, we will review the latest features, discuss the pros and cons, and also compare Swift 6 to C++.

Initially intended for developing applications on iOS and macOS, Swift has transformed into a multi-paradigm language, retaining its emphasis on application development. It is one of the most popular languages used in the development of mobile and desktop applications, games, and even backend services. 

Let’s look into the new features of Swift 6 released in June 2024 and how it improves development as well as a comparison to C++.

Swift’s Timeline: How the 6th Version Came to Light

Apple has a reputation for releasing products that are beautiful and user-obsessed, and its programming language Swift aligns with that sentiment. Swift was made publicly available with iOS 8 in 2014, but its roots can be traced back to the 1990s, when software was crafted for the NeXT system, a precursor of what we know today as macOS and iOS.

Unlike most programming languages, which have a single forebear, Swift draws influence from a multitude of languages, including C++, Python, Haskell, and Objective-C. Swift was created over the span of several years by Apple engineers, turning it into a multifunctional, high-performing language.

When initially released, Swift came with a rich 500-page document elaborating its tools. Soon after, in 2015, Swift 2.0 arrived with heightened performance alongside improved error management. The polishing continued with Swift 3.0 in 2016, where the deprecation of older syntactic constructs was accompanied by more significant structural changes.

Swift became even more polished with version 4.0 in 2017, which increased the stability and resilience of its APIs. Then came 2019 and Swift 5, which brought long-awaited ABI stability, making long-term expansion planning far more practical. Finally, on June 11, 2024, Apple released Swift 6, boasting the most advanced features and optimizations to date.

Swift 6: Features, Benefits, and Future Outlook

Swift targets developers who build applications on Apple ecosystems as it prioritizes easy adoption, rapid development cycles, and enhanced performance. It has gone through several changes since inception and seeks to address the pitfalls of existing programming languages. 

As Objective-C’s successor, Swift has a clear advantage in its straightforward, compact form, which makes code easier to understand and more efficient to write. Readability has improved considerably as the language has rapidly developed.

Speed optimization is another pillar of Swift's design: for a presumably simple language, Swift is extremely fast. Benchmarks are reported to show it running up to three times faster than Objective-C and up to eight times faster than Python. With Swift 6, Apple aims to push those limits further and compete directly with C++ in execution speed.

Key Advantages:

  • Modern Libraries and Frameworks – Swift provides an extensive ecosystem of pre-built libraries and frameworks, enabling developers to write efficient, high-quality code with minimal effort.
  • Open-Source Community – Unlike many Apple technologies, Swift is open source, allowing developers worldwide to contribute, fix bugs, and introduce new features—ensuring continuous improvement.
  • Enhanced Security – Swift improves memory management and reduces vulnerabilities, making applications more resistant to unauthorized data access and potential exploits.
  • Reduced Critical Errors – Swift’s stricter language design minimizes the risk of runtime crashes and critical failures, resulting in more stable software.
  • Live Coding and Visualization – Developers can instantly preview code execution in a sandbox environment, helping them quickly identify and fix errors, thereby speeding up the development process.
  • Dynamic Libraries – Swift supports dynamic linking, allowing developers to push updates and bug fixes without requiring a full OS update.

Swift’s Limitations and Challenges

Despite its strengths, Swift is not without its limitations. It remains a specialized language primarily designed for iOS and macOS development. While there were discussions about adapting Swift for Android, the project has not materialized.

Moreover, combining Swift with older Objective-C applications can be difficult. Although Apple provides a bridging mechanism, it often creates issues with software compilation and maintenance. Fortunately, these integration problems have been eased by the changes in Swift 6.

The long-term prospects for Swift look good. The need for proficient Swift developers grows as the macOS and iOS ecosystems expand, and given Apple's continuous investment in the language, Swift is unlikely to be abandoned in the coming decade. Its open-source model guarantees that the community will take active responsibility for its growth, driving further innovation.

Swift's lack of full multi-platform support limits its adoption to the Apple ecosystem. While this ensures deep optimization for macOS and iOS applications, it restricts Swift's usability beyond Apple products.

Still, within the Apple ecosystem, Swift provides unmatched performance, security, usability, and versatility, making it a resilient choice for developers.

What’s New in Swift 6: A Comprehensive Overview

The last five years of Swift's development are encapsulated in its sixth version, which has been marked as the language's most important change yet. The update brings numerous improvements to overall functionality and user experience. As specialists have noted, this release contains important new features related to code usability, information protection, parallel processing, and system resource management for more constrained environments. There is still room for improvement down the line, but for now Swift has set itself up nicely, with further innovations planned for the coming years.

Key Features of Swift 6

Full Parallelism Support

Parallelism has been a major focus in Swift 6, especially when it comes to executing multiple tasks concurrently. One of the standout features of this release is the introduction of full parallelism checking, which is now enabled by default. This change, a step up from the optional checking in previous versions, ensures better performance in multi-threaded applications and reduces the likelihood of errors when handling concurrent tasks.

Swift 6 also improves the accuracy of parallelism checking by eliminating false positives, particularly for the data-race warnings that were common in version 5.10. A key change here is SE-0414 (region-based isolation), which refines how the compiler reasons about sendability: it can now verify that certain parts of the code can safely run concurrently, based on whether the data involved can be safely sent across threads. The result is a more intuitive and efficient approach to parallelism, making it easier for developers to work with concurrent tasks.

Non-Copyable Data Types

Swift has traditionally allowed all value types and reference types to be copied. With Swift 6, however, non-copyable data types are now a reality. This feature is particularly useful in scenarios where data needs to be treated as unique resources. By preventing unnecessary copies, Swift 6 minimizes resource leaks and enhances coding convenience, making it easier to manage and handle unique resources in a program.
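The idea can be approximated in other languages, though only at run time. The Python sketch below is a conceptual analogue, not Swift's actual non-copyable (`~Copyable`) semantics: it models a unique resource that refuses to be copied, so there is always exactly one owner responsible for cleanup.

```python
import copy

class UniqueHandle:
    """A resource that refuses to be copied, approximating Swift 6's
    non-copyable types in spirit. Python can only enforce this at run
    time; Swift enforces it at compile time."""

    def __init__(self, name: str):
        self.name = name
        self.closed = False

    def __copy__(self):
        raise TypeError(f"{self.name} is a unique resource; move it, don't copy it")

    def __deepcopy__(self, memo):
        raise TypeError(f"{self.name} is a unique resource; move it, don't copy it")

    def close(self):
        # With no copies in existence, cleanup happens exactly once.
        self.closed = True
```

Any attempt to duplicate the handle via `copy.copy` or `copy.deepcopy` raises a `TypeError`, which is the run-time cousin of the compile error Swift emits.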

Seamless Integration with C++

One of the most exciting new features in Swift 6 is its ability to seamlessly interact with C++. Developers can now integrate C++ virtual methods, arguments, and standard library elements directly into Swift, eliminating the need for time-consuming transitions between languages. This deepened interoperability allows for smoother integration of C++ elements into Swift projects, ensuring a more fluid developer experience.

Additionally, Swift 6 improves the security of data by minimizing vulnerabilities that could occur when integrating with C++. This makes Swift 6 an even more attractive choice for developers working in cross-platform environments.

Enhanced Error Handling

Swift 6 also introduces one of the most robust error handling mechanisms in the industry. Bugs are automatically detected and addressed during the early stages of program development, saving time and reducing the likelihood of errors going unnoticed until later in the process. This feature helps developers create more reliable code by identifying potential issues proactively.

Function Replacement and Specialized Capabilities

Earlier versions of Swift introduced certain functions that were kept hidden or in an experimental phase. In Swift 6, these features have been fully realized, offering specialized capabilities related to variables and data management in parallel environments. These additions enhance the flexibility and power of the language, enabling developers to better manipulate data in complex, multi-threaded scenarios.

Package Iteration

Swift 6 introduces pack iteration, which allows developers to loop over the parameter packs introduced in earlier versions. This functionality is particularly useful when dealing with tuples (ordered sets of values) and allows for a more streamlined and efficient coding process. By simplifying iteration, Swift 6 makes it easier to compare and manipulate data in complex applications.

How Swift 6 Changes the Development Landscape

The Swift 6 update has been widely hailed by the developer community as a positive and necessary evolution of the language. One of the most notable improvements is the language’s enhanced focus on parallelism, which now allows developers to write code that can handle concurrency more easily and efficiently. Features like automatic parallelism checking and the refined sendability concept significantly reduce the cognitive load required to work with concurrent tasks, speeding up the development process.

In addition to parallelism, the update places a strong emphasis on data safety, particularly in the context of data races—a common issue in multi-threaded programming. Swift 6’s automatic data isolation and protection mechanisms provide built-in safeguards that minimize the risk of data corruption or race conditions. This means developers can write secure, efficient parallel code with confidence, making it easier to build robust applications that scale.

Ultimately, Swift 6 enhances the language’s capabilities, transforming it into a more powerful, secure, and efficient tool suitable for developing a wide range of applications. Whether building mobile apps, desktop software, server-side programs, or complex system applications, Swift 6 equips developers with the tools they need to create high-quality, high-performance solutions.

Comparing Swift 6 and C++: A Comprehensive Analysis

With the release of Swift 6, Apple proudly asserts that the language has become not only faster but also safer, potentially surpassing C++ in terms of efficiency. Originally designed as a potential alternative to C++, Swift has steadily evolved to address the vulnerabilities that were inherent in C++.

In this comparison, we’ll examine key aspects of both Swift 6 and C++ to evaluate how they measure up in different parameters:

1. Speed

True to its name, Swift is designed for speed. From its inception, the language was built to outperform other programming languages in execution speed. Swift 6 has surpassed the latest version of Python in terms of speed and, in some cases, even outpaces C++ in specific algorithms. This makes Swift an appealing choice for applications that demand high performance.

2. Performance

The improved speed of Swift 6 translates directly into better performance. Code execution in Swift is faster, and this speed boost contributes to more efficient application performance without overloading device resources, as often happens with C++ and other languages. Swift 6 ensures optimal performance while keeping resource consumption at manageable levels, making it ideal for applications running on mobile or desktop platforms.

3. Code Simplicity

Swift’s clean and simple syntax is one of its most compelling features. Designed with developer productivity in mind, Swift’s syntax is intuitive and free of unnecessary complexity, unlike C++’s often convoluted structure. Swift 6 improves on this foundation, removing even more cumbersome elements to make the language feel closer to a natural language. This approach reduces the likelihood of non-obvious errors and makes working with the language more accessible. In contrast, C++ still tends to be more difficult to learn, requiring a deep understanding of complex concepts like pointers and manual memory management.

4. Memory Management

Swift relies on Automatic Reference Counting (ARC), a memory management system that automatically tracks and cleans up unused resources, and Swift 6 continues to refine it. ARC eliminates the need for developers to manually manage memory allocation and deallocation, significantly reducing the risk of memory leaks. C++ requires more manual intervention in this area, relying on developers to handle memory management themselves, which can lead to errors and inefficiencies.
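Reference counting is easy to observe in a language that also uses it. CPython counts references at run time, whereas Swift's compiler inserts the retain/release bookkeeping at compile time, but the mechanism is analogous, as this sketch shows.

```python
import sys

class Resource:
    """Stand-in for any object whose lifetime is governed by refcounts."""
    def __init__(self, name):
        self.name = name

obj = Resource("image-buffer")
# getrefcount's own argument temporarily adds one reference.
baseline = sys.getrefcount(obj)

alias = obj  # a new strong reference: the count goes up by one
assert sys.getrefcount(obj) == baseline + 1

del alias    # the reference is released: the count drops back
assert sys.getrefcount(obj) == baseline
```

When the count reaches zero, the object is reclaimed immediately; that is the same "no more owners, clean it up" rule ARC applies to Swift objects.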

5. Security

Security is another area where Swift 6 stands out. The language has built-in features that minimize the risk of unauthorized access to sensitive data. Additionally, Swift 6 excels at detecting developer bugs early in the development process, reducing the chances of subtle, critical errors slipping through the cracks. Unlike C++, which can be prone to vulnerabilities such as buffer overflows and pointer errors, Swift 6’s more predictable behavior makes testing and debugging easier, ensuring a safer development environment.

6. Open Source and Community Support

Apple’s decision to make Swift an open-source language has significantly contributed to its growth and adaptability. Swift can now be used by anyone—from seasoned professionals to self-taught developers. The open-source nature also allows Swift to be ported to third-party systems, expanding its utility and supporting the creation of new libraries that further enhance its capabilities.

While Apple has traditionally been known for its closed development ecosystem, this strategic move has allowed the Swift community to flourish, offering contributions that help improve the language. Swift’s official resources are readily accessible, with Apple providing comprehensive tutorials and an integrated development environment (IDE) for macOS users. Swift Playgrounds also allows developers to experiment with code and test applications in real-time, further simplifying the learning curve.

Areas Where C++ Still Holds an Edge

Despite Swift 6’s many advantages, C++ is not without its strengths. Swift’s biggest drawback remains its narrow specialization—the language is primarily designed for developing applications on Apple platforms. While Swift applications can technically run on Windows and Linux, the process is cumbersome, making Swift unsuitable for cross-platform development. This is where C++, a universal language, excels. C++ can run on virtually any platform, making it a more practical choice for applications that need to operate across multiple operating systems.

Additionally, Swift’s relatively small Russian-speaking community may limit its appeal in certain regions, though this is a minor consideration compared to the vast global community supporting C++. The size of the community directly impacts the availability of resources, tools, and peer support, which is crucial for language adoption and innovation.

Another area where C++ maintains a slight advantage is its tight integration with Objective-C. While Swift 6 offers seamless integration with Objective-C, this requires developers to be proficient in both languages. Beginners who are just starting to learn iOS development must often master both languages to effectively work with existing Apple applications.

Swift 6 vs. C++

For Apple platform development, Swift 6 clearly emerges as the preferred language. Its speed, performance, simplicity, security features, and robust community support make it the ideal choice for developers creating apps for iOS, macOS, and beyond. However, due to Swift’s specialized focus, it is not as versatile as C++, which continues to be the dominant language for cross-platform and system-level development.

While Swift 6 offers impressive advancements and enhancements, C++ remains the more functional and practical choice for projects that require broad compatibility and system-level control. For developers targeting Apple devices, Swift 6 is undoubtedly the future; for those needing flexibility across multiple platforms, C++ retains its relevance.