From Bugs to Brilliance: How to Leverage AI to Left-Shift Quality in Software Development

Contributed by Gunjan Agarwal, Software Engineering Manager at Meta
Key Points
  • Research suggests AI can significantly enhance left-shifting quality in software development by detecting bugs early, reducing costs, and improving code quality.
  • AI tools like CodeRabbit and Diffblue Cover have proven effective in automating code reviews and unit testing, significantly improving speed and accuracy in software development.
  • Early bug detection saves money: studies show that fixing bugs in production can cost 30 to 60 times more than fixing them in earlier stages.
  • Notably, AI-driven CI/CD tools like Harness can reduce deployment failures by up to 70%, improving release efficiency.

Introduction to Left-Shifting Quality

Left-shifting quality in software development involves integrating quality assurance (QA) activities, such as testing, code review, and vulnerability detection, earlier in the software development lifecycle (SDLC). Traditionally, these tasks were deferred to the testing or deployment phases, often leading to higher costs and delays due to late bug detection. By moving QA tasks to the design, coding, and initial testing phases, teams can identify and resolve issues proactively, preventing them from escalating into costly problems. For example, catching a bug during the design phase might cost a fraction of what it would cost to fix in production, as evidenced by a study by the National Institute of Standards and Technology (NIST), which found that resolving defects in production can cost 30 to 60 times more, especially for security defects.

The integration of artificial intelligence (AI) into this process has accelerated left-shifting quality, offering automated, intelligent solutions that enhance efficiency and accuracy. AI tools can analyze code, predict failures, and automate testing, enabling teams to deliver high-quality software faster and more cost-effectively. This article explores the concept, benefits, and specific AI-powered techniques, supported by case studies and quantitative data, to provide a comprehensive understanding of how AI is transforming software development.

What is Left-Shifting Quality in Software Development?

Left-shifting quality refers to the practice of integrating quality assurance (QA) processes earlier in the software development life cycle (SDLC), encompassing stages like design, coding, and initial testing, rather than postponing them until the later testing or deployment phases. This approach aligns with agile and DevOps methodologies, which emphasize continuous integration and delivery (CI/CD). By conducting tests early, teams can identify and address bugs and issues before they become entrenched in the codebase, thereby minimizing the need for extensive rework in subsequent stages.​

The financial implications of detecting defects at various stages of development are significant. For example, IBM’s Systems Sciences Institute reported that fixing a bug discovered during implementation costs approximately six times more than addressing it during the design phase. Moreover, errors found after product release can be four to five times more expensive to fix than those identified during design, and defects that reach the maintenance phase can cost up to 100 times more.

This substantial increase in cost underscores the critical importance of early detection. Artificial intelligence (AI) facilitates this proactive approach through automation and predictive analytics, enabling teams to identify potential issues swiftly and accurately, thereby enhancing overall software quality and reducing development costs.​

Benefits of Left-Shifting with AI

The benefits of left-shifting quality are significant, particularly when enhanced by AI, and are supported by quantitative data:

  • Early Bug Detection: Research consistently shows that addressing bugs early in the development process is significantly less costly than fixing them post-production. For instance, a 2022 report by the Consortium for Information & Software Quality (CISQ) found that software quality issues cost the U.S. economy an estimated $2.41 trillion, highlighting the immense financial impact of unresolved software defects. AI tools, by automating detection, can significantly reduce these costs.​
  • Faster Development Cycles: Identifying issues early allows developers to make quick corrections, speeding up release cycles. For example, AI-driven CI/CD tools like Harness have been shown to reduce deployment time by 50%, enabling faster iterations (see the Harness case study later in this article).
  • Improved Code Quality: Regular quality checks at each stage, facilitated by AI, reinforce best practices and promote a culture of quality. Tools like CodeRabbit reduce code review time, improving developer productivity and code standards.​
  • Cost Savings: The financial implications of software bugs are profound. For instance, in July 2024, a faulty software update from cybersecurity firm CrowdStrike led to a global outage, causing Delta Air Lines to cancel 7,000 flights over five days, affecting 1.3 million customers, and resulting in losses exceeding $500 million. AI-driven early detection and remediation can help prevent such costly incidents.​
  • Qualitative Improvements and Developer Well-being: AI tools like GitHub Copilot have shown potential to support developer well-being by improving productivity and reducing repetitive tasks, benefits that some studies link to increased job satisfaction. However, evidence on this front remains mixed. Other research points to potential downsides, such as increased cognitive load when debugging AI-generated code, concerns over long-term skill degradation, and even heightened frustration among developers. These conflicting findings highlight the need for more comprehensive, long-term studies on AI’s true impact on developer experience.

Incorporating AI into software development processes offers significant advantages, but it’s crucial to balance these with an awareness of the potential challenges to fully realize its benefits.

AI-Powered Left-Shifting Techniques

AI offers a suite of techniques that enhance left-shifting quality, each addressing specific aspects of the SDLC. Below, we detail six key methods, supported by examples and data, explaining their internal workings, the challenges they face, and their impact on reducing cognitive load for developers.

1. Intelligent Code Review and Quality Analysis

Intelligent code review tools use AI to analyze code for quality, readability, and adherence to best practices, detecting issues like bugs, security vulnerabilities, and inefficiencies. Tools like CodeRabbit employ large language models (LLMs), such as GPT-4, to understand and analyze code changes in pull requests (PRs). Internally, CodeRabbit’s AI architecture is designed for context-aware analysis, integrating with static analysis tools like Semgrep for security checks and ESLint for style enforcement. The tool learns from team practices over time, adapting its recommendations to align with specific coding standards and preferences.
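
To make the mechanics concrete, here is a minimal sketch of how a review step might combine a pull-request diff with static-analysis findings before asking a language model for feedback. It assumes the Semgrep CLI is installed locally; the LlmClient interface and reviewPullRequest function are hypothetical stand-ins, not CodeRabbit’s actual architecture or API.

```kotlin
// Minimal sketch of an AI-assisted review step (not CodeRabbit's actual pipeline).
// Assumes the `semgrep` CLI is installed; LlmClient is a hypothetical abstraction
// over whichever LLM API the team uses.
import java.io.File

interface LlmClient {
    fun complete(prompt: String): String
}

fun runSemgrep(repoDir: File): String {
    // Run a static-analysis pass and capture its JSON findings.
    val process = ProcessBuilder("semgrep", "scan", "--config", "auto", "--json", repoDir.absolutePath)
        .start()
    return process.inputStream.bufferedReader().readText()
}

fun reviewPullRequest(diff: String, repoDir: File, llm: LlmClient): String {
    val findings = runSemgrep(repoDir)
    // Give the model both the change and the static-analysis context,
    // so its comments are grounded in concrete findings rather than guesswork.
    val prompt = buildString {
        appendLine("You are a code reviewer. Review the following diff.")
        appendLine("Flag bugs, security issues, and style problems, and be concise.")
        appendLine()
        appendLine("Static analysis findings (Semgrep JSON):")
        appendLine(findings)
        appendLine("Diff:")
        appendLine(diff)
    }
    return llm.complete(prompt)
}
```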

Challenges: A significant challenge is the potential for AI to misinterpret non-trivial business logic due to its lack of domain-specific knowledge. For instance, while CodeRabbit can detect syntax errors or common vulnerabilities, it may struggle with complex business rules or edge cases that require human understanding. Additionally, integrating such tools into existing workflows may require initial setup and adjustment, though CodeRabbit claims instant setup with no complex configuration.

Impact: By automating code reviews, tools like CodeRabbit reduce manual review time by up to 50%, allowing developers to focus on higher-level tasks. This not only saves time but also reduces cognitive load, as developers no longer need to manually scan through large PRs. A GitLab survey highlighted that manual code reviews are a top cause of developer burnout due to delays and inconsistent feedback. AI tools mitigate this by providing consistent, actionable feedback, improving productivity and reducing mental strain.

Case Study: At KeyValue Software Systems, implementing CodeRabbit reduced code review time by 90% for their Golang and Python projects, allowing developers to focus on feature development rather than repetitive review tasks.

2. Automated Unit Test Generation

Unit testing ensures that individual code components function correctly, but writing these tests manually can be time-consuming. AI tools automate this process by generating comprehensive test suites. Diffblue Cover, for example, uses reinforcement learning to create unit tests for Java code. Internally, Diffblue’s reinforcement learning agents interact with the code, learning to write tests that maximize coverage and reflect every behavior of methods. These agents are trained to understand method functionality and generate tests autonomously, even for complex scenarios.
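
Diffblue Cover targets Java codebases and writes its tests automatically; the hand-written Kotlin/JUnit example below only illustrates the kind of behavior-capturing tests such tools produce. The DiscountCalculator class is invented for the example.

```kotlin
// Illustrative only: a hand-written example of the kind of unit test that
// AI test-generation tools produce from an existing method's observed behavior.
// DiscountCalculator is a hypothetical class, not taken from any real codebase.
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Assertions.assertThrows
import org.junit.jupiter.api.Test

class DiscountCalculator {
    fun discountedPrice(price: Double, loyaltyYears: Int): Double {
        require(price >= 0) { "price must be non-negative" }
        val rate = when {
            loyaltyYears >= 5 -> 0.15
            loyaltyYears >= 2 -> 0.10
            else -> 0.0
        }
        return price * (1 - rate)
    }
}

class DiscountCalculatorTest {
    private val calculator = DiscountCalculator()

    @Test
    fun `long-term customers get fifteen percent off`() {
        assertEquals(85.0, calculator.discountedPrice(100.0, loyaltyYears = 6), 1e-9)
    }

    @Test
    fun `new customers pay full price`() {
        assertEquals(100.0, calculator.discountedPrice(100.0, loyaltyYears = 0), 1e-9)
    }

    @Test
    fun `negative prices are rejected`() {
        assertThrows(IllegalArgumentException::class.java) {
            calculator.discountedPrice(-1.0, loyaltyYears = 1)
        }
    }
}
```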

Challenges: Handling large, complex codebases with numerous dependencies remains a challenge. Additionally, ensuring that generated tests are meaningful and not just covering trivial cases requires sophisticated algorithms. For instance, Diffblue Cover must balance test coverage with test relevance to avoid generating unnecessary or redundant tests.

Impact: Automated test generation saves developers significant time – Diffblue Cover claims to generate tests 250x faster than manual methods, increasing code coverage by 20%. This allows developers to focus on writing new code or fixing bugs rather than repetitive testing tasks. By reducing the need for manual test writing, these tools lower cognitive load, as developers can rely on AI to handle the tedious aspects of testing. A Diffblue case study showed a 90% reduction in test writing time, enabling teams to focus on higher-value tasks.

Case Study: A financial services firm using Diffblue Cover reported a 30% increase in test coverage and a 50% reduction in regression bugs within six months, significantly reducing the mental burden on developers during code changes.

3. Behavioral Testing and Automated UI Testing

Behavioral testing ensures software behaves as expected, while UI testing verifies functionality and appearance across devices and browsers. AI automates these processes, enhancing scalability and efficiency. Applitools, for instance, uses Visual AI to detect visual regressions by comparing screenshots of the UI with predefined baselines. Internally, Applitools captures screenshots and uses AI to analyze visual differences, identifying issues like layout shifts or color inconsistencies. It can handle dynamic content and supports cross-browser and cross-device testing.
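
The sketch below is a deliberately naive illustration of baseline comparison: it counts pixels that differ beyond a small per-channel slack and flags the screenshot when the changed fraction exceeds a tolerance. Applitools’ Visual AI is far more sophisticated than this; the code only conveys the underlying idea.

```kotlin
// A deliberately naive visual-regression check: compare a new screenshot to a
// stored baseline and ignore tiny differences (e.g. anti-aliasing noise).
// Real Visual AI tools use learned models rather than raw pixel counting.
import java.io.File
import javax.imageio.ImageIO
import kotlin.math.abs

fun looksVisuallySame(baselineFile: File, candidateFile: File, tolerance: Double = 0.002): Boolean {
    val baseline = ImageIO.read(baselineFile)
    val candidate = ImageIO.read(candidateFile)
    // A change in dimensions is itself a layout regression.
    if (baseline.width != candidate.width || baseline.height != candidate.height) return false

    var changedPixels = 0
    for (y in 0 until baseline.height) {
        for (x in 0 until baseline.width) {
            val a = baseline.getRGB(x, y)
            val b = candidate.getRGB(x, y)
            // Compare channels with a small slack so sub-pixel rendering noise is ignored.
            val diff = maxOf(
                abs(((a shr 16) and 0xFF) - ((b shr 16) and 0xFF)),
                abs(((a shr 8) and 0xFF) - ((b shr 8) and 0xFF)),
                abs((a and 0xFF) - (b and 0xFF))
            )
            if (diff > 12) changedPixels++
        }
    }
    val changedFraction = changedPixels.toDouble() / (baseline.width * baseline.height)
    return changedFraction <= tolerance
}
```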

Challenges: One challenge is handling dynamic UI elements that change based on user interactions or data. Ensuring that the AI correctly identifies meaningful visual differences while ignoring irrelevant ones, such as anti-aliasing or minor layout shifts, is crucial. Additionally, maintaining accurate baselines as the UI evolves can be resource-intensive.

Impact: Automated UI testing reduces manual testing effort by up to 50%, allowing QA teams to test more scenarios in less time. This leads to faster release cycles and reduces cognitive load on developers, as they can rely on automated tests to catch visual regressions.

Case Study: An e-commerce platform using Applitools reported a noticeable reduction in UI-related bugs post-release, as developers could confidently make UI changes without fear of introducing visual regressions.

4. Continuous Integration and Continuous Deployment (CI/CD) Automation

CI/CD pipelines automate the build, test, and deployment processes. AI enhances these pipelines by predicting failures and optimizing workflows. Harness, for example, uses AI to predict deployment failures based on historical data. Internally, Harness collects logs, metrics, and outcomes from previous deployments to train machine learning models that analyze patterns and predict potential issues. These models can identify risky deployments before they reach production.
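
Harness does not publish its model internals, so the sketch below only illustrates the general idea: score a pending deployment against historical outcomes and block it above a threshold. The features, weights, and threshold are hypothetical placeholders for what a real system would learn from its own logs and metrics.

```kotlin
// Toy illustration of risk-scoring a deployment from historical outcomes.
// The features and weights are hypothetical; production systems learn these
// from logs, metrics, and past deployment results rather than hard-coding them.
import kotlin.math.abs

data class DeploymentRecord(
    val changedFiles: Int,
    val touchedCriticalService: Boolean,
    val failedTests: Int,
    val offHoursDeploy: Boolean,
    val failed: Boolean
)

data class PendingDeployment(
    val changedFiles: Int,
    val touchedCriticalService: Boolean,
    val failedTests: Int,
    val offHoursDeploy: Boolean
)

fun riskScore(pending: PendingDeployment, history: List<DeploymentRecord>): Double {
    // Base rate: how often similar-sized changes failed in the past.
    val similar = history.filter { abs(it.changedFiles - pending.changedFiles) <= 20 }
    val baseRate = if (similar.isEmpty()) 0.1 else similar.count { it.failed }.toDouble() / similar.size

    // Simple additive penalties standing in for learned feature weights.
    var score = baseRate
    if (pending.touchedCriticalService) score += 0.2
    if (pending.failedTests > 0) score += 0.3
    if (pending.offHoursDeploy) score += 0.1
    return score.coerceIn(0.0, 1.0)
}

fun shouldBlock(pending: PendingDeployment, history: List<DeploymentRecord>): Boolean =
    riskScore(pending, history) > 0.6  // threshold would be tuned per team in practice
```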

Challenges: Ensuring access to high-quality labeled data is essential, as deployments can be complex with multiple failure modes. Additionally, models must be updated regularly to account for changes in the codebase and environments. False positives or missed critical issues can undermine trust in the system.

Impact: By predicting deployment failures, Harness reduces deployment failures by up to 70%, saving time and resources. This reduces cognitive load on DevOps teams, as they no longer need to constantly monitor deployments and react to failures. Automated CI/CD pipelines also enable faster feedback loops, allowing developers to iterate more rapidly.

Case Study: A tech startup using Harness reported a 50% reduction in deployment-related incidents and a 30% increase in deployment frequency, as AI-driven predictions prevented problematic releases.

5. Intelligent Bug Tracking and Prioritization

Bug tracking is critical, but manual prioritization can be inefficient. AI automates detection and prioritization, enhancing resolution speed. Bugasura, for instance, uses AI to classify and prioritize bugs based on severity and impact. Internally, Bugasura likely employs machine learning models trained on historical bug data to classify new bugs and assign priorities. It may also use natural language processing to extract relevant information from bug reports.
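
As a rough illustration of automated triage, the sketch below ranks incoming reports with a simple keyword-and-reach heuristic. Real tools such as Bugasura presumably rely on trained classifiers and NLP; the signals and weights here are invented purely to show the shape of the problem.

```kotlin
// Toy triage scorer: rank incoming bug reports so the most likely-critical ones
// surface first. Real tools use models trained on historical bug data; the
// keywords and weights here are purely illustrative.
data class BugReport(val title: String, val description: String, val affectedUsers: Int)

private val criticalSignals = listOf("crash", "data loss", "security", "cannot log in", "payment")

fun priorityScore(bug: BugReport): Int {
    val text = (bug.title + " " + bug.description).lowercase()
    val keywordHits = criticalSignals.count { it in text }
    val reachWeight = when {
        bug.affectedUsers > 10_000 -> 3
        bug.affectedUsers > 100 -> 2
        else -> 1
    }
    return keywordHits * 2 + reachWeight
}

fun triage(bugs: List<BugReport>): List<BugReport> =
    bugs.sortedByDescending(::priorityScore)
```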

Challenges: Accurately classifying bugs, especially in complex systems with multiple causes or symptoms, is a significant challenge. Avoiding false positives and ensuring critical issues are not overlooked is crucial. Additionally, integrating with existing project management tools can introduce compatibility issues.

Impact: Intelligent bug tracking reduces the time spent on manual triage by up to 40%, allowing developers to focus on fixing the most critical issues first. This leads to faster resolution times and improved software quality. By automating prioritization, these tools reduce cognitive load, as developers no longer need to manually sort through bug reports.

Case Study: A SaaS company using Bugasura reduced their bug resolution time by 30% and improved customer satisfaction scores by 15%, as critical bugs were addressed more quickly.

6. Dependency Management and Vulnerability Detection

Managing dependencies and detecting vulnerabilities early is crucial for security. AI tools scan for risks and outdated dependencies without deploying agents. Wiz, for example, uses AI to analyze cloud environments for vulnerabilities. Internally, Wiz collects data from various cloud services (e.g., AWS, Azure, GCP) and uses machine learning models to identify misconfigurations, outdated software, and other security weaknesses. It analyzes relationships between components to uncover potential attack paths.
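
Wiz’s graph analysis is proprietary, but the core idea of attack-path discovery can be sketched generically: model resources and their connections as a graph and search for routes from internet-exposed nodes to sensitive data. The resource names below are made up; a real tool would build the graph from cloud provider APIs.

```kotlin
// Generic illustration of attack-path analysis: model cloud resources as a graph
// and search for any route from an internet-exposed node to sensitive data.
// The resource names are made up; real tools build this graph from cloud APIs.
data class Resource(
    val name: String,
    val internetExposed: Boolean = false,
    val holdsSensitiveData: Boolean = false
)

fun findAttackPaths(resources: List<Resource>, edges: Map<String, List<String>>): List<List<String>> {
    val byName = resources.associateBy { it.name }
    val paths = mutableListOf<List<String>>()

    // Depth-first walk from each exposed entry point, recording paths that end at sensitive data.
    fun walk(current: String, visited: List<String>) {
        val node = byName[current] ?: return
        val path = visited + current
        if (node.holdsSensitiveData) paths += path
        for (next in edges[current].orEmpty()) {
            if (next !in visited) walk(next, path)
        }
    }

    resources.filter { it.internetExposed }.forEach { walk(it.name, emptyList()) }
    return paths
}

fun main() {
    val resources = listOf(
        Resource("load-balancer", internetExposed = true),
        Resource("app-server"),
        Resource("customer-db", holdsSensitiveData = true)
    )
    val edges = mapOf(
        "load-balancer" to listOf("app-server"),
        "app-server" to listOf("customer-db")
    )
    println(findAttackPaths(resources, edges))  // [[load-balancer, app-server, customer-db]]
}
```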

Challenges: Keeping up with the rapidly evolving cloud environments and constant updates to cloud services is a major challenge. Minimizing false positives while ensuring all critical vulnerabilities are detected is also important. Additionally, ensuring compliance with security standards across diverse environments can be complex.

Impact: Automated vulnerability detection reduces manual scanning efforts, allowing security teams to focus on remediation. By providing prioritized lists of vulnerabilities, these tools help manage workload effectively, reducing cognitive load. Wiz claims to reduce vulnerability identification time by 30%, enhancing overall security posture.

Case Study: A fintech firm using Wiz identified and patched 50% more critical vulnerabilities in their cloud environment compared to traditional methods, reducing their risk exposure significantly.

Conclusion

Left-shifting quality, enhanced by AI, is a critical strategy for modern software development, reducing costs, improving quality, and accelerating delivery. AI-powered tools automate and optimize QA processes, from code review to vulnerability detection, enabling teams to catch issues early and deliver brilliance. As AI continues to evolve, with trends like generative AI for test generation and predictive analytics, the future promises even greater efficiency. Organizations adopting these techniques can transform their development processes, achieving both speed and excellence.

Optimizing Android for Scale: Storage Strategies for Modern Mobile Ecosystems

Contributed by Parth Menon, Software Engineer

Android is among the most widely adopted mobile technologies in the world, powering billions of devices across the globe. As it scales, efficient mobile storage management has never been more important. Applications are becoming increasingly complex, storing large media files, intricate data sets, and a growing number of assets; as a result, app performance and user experience have become vital challenges to address. What’s more, modern applications are no longer built by a single team. Some of the world’s largest apps, like Facebook, Instagram, Deliveroo, and Google, are developed by multiple teams and organizations spread across different countries, time zones, and continents. This vast, global collaboration adds further layers of complexity to both app development and storage management. This article delves into storage strategies that support scalability, enhance user experience, and optimize app performance while navigating the challenges of such widespread teamwork.

The Increasingly Important World of Efficient Storage in Mobile Ecosystems

Mobile storage is the backbone of performance on Android devices, affecting everything from app load times to how users interact with content. Unlike desktops or laptops, where storage is scalable and users can easily upgrade their capacity, mobile devices are limited by the storage they ship with. Once you buy a mobile device, you’re stuck with its storage capacity, which makes optimizing how an app manages its data even more important. Additionally, users interact with mobile devices at a faster pace, frequently switching between apps, which demands that apps load quickly and respond instantly. In short, a well-optimized storage system ensures that apps run efficiently while still offering rich user experiences.

Why It Matters:

User Expectations: Mobile users expect apps to be quick and responsive. When applications consume a lot of storage or take longer to load because of poor data management, users get frustrated. A recent report from UXCam indicates that 90% of users have stopped using an app due to poor performance, and 88% will abandon an app if it consistently experiences glitches or technical bugs. Additionally, 21% of mobile apps are used only once, underscoring the need for apps to deliver immediate value and seamless functionality.

Developer Challenges: Android developers must create applications that scale well across a wide range of devices, many of which ship with limited internal storage. Variations in hardware, screen size, and storage capacity place increasing demands on developers to find flexible, efficient ways of storing data on Android and to ensure optimal performance regardless of device type.

Key Strategies for Optimizing Android Storage

1. Using Scoped Storage for Security and Efficiency

Scoped storage, introduced in Android 10, was an important behavior change that fundamentally altered how apps share files and access external data. Under the previous model, apps had nearly free run of the device’s shared storage, for better or worse. In contrast, scoped storage provides a restricted environment in which an app is only allowed to access specific directories.

Developers should migrate their applications to scoped storage to align with the privacy standards set by Google. Scoped storage not only restricts data access but also increases user control over which data can be shared, improving trust and security.

For instance, the MediaStore API can be used to manage user media files, such as photos and videos, without having direct access to other sensitive files. This API is quite handy in interacting with media files while abiding by scoped storage guidelines.
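
As a brief illustration, the Kotlin snippet below queries image metadata through MediaStore, which works under scoped storage; error handling, pagination, and the permission requests needed to read other apps’ media are omitted for clarity.

```kotlin
// Query image metadata through MediaStore instead of reading external storage
// directly: this works under scoped storage without broad storage permissions
// for media the app itself contributed (reading media from other apps still
// requires the appropriate read permission for the Android version in use).
import android.content.ContentUris
import android.content.Context
import android.net.Uri
import android.provider.MediaStore

data class ImageEntry(val uri: Uri, val name: String, val sizeBytes: Long)

fun loadImages(context: Context): List<ImageEntry> {
    val collection = MediaStore.Images.Media.EXTERNAL_CONTENT_URI
    val projection = arrayOf(
        MediaStore.Images.Media._ID,
        MediaStore.Images.Media.DISPLAY_NAME,
        MediaStore.Images.Media.SIZE
    )
    val images = mutableListOf<ImageEntry>()
    context.contentResolver.query(
        collection, projection, null, null,
        "${MediaStore.Images.Media.DATE_ADDED} DESC"
    )?.use { cursor ->
        val idCol = cursor.getColumnIndexOrThrow(MediaStore.Images.Media._ID)
        val nameCol = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.DISPLAY_NAME)
        val sizeCol = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.SIZE)
        while (cursor.moveToNext()) {
            val uri = ContentUris.withAppendedId(collection, cursor.getLong(idCol))
            images += ImageEntry(uri, cursor.getString(nameCol), cursor.getLong(sizeCol))
        }
    }
    return images
}
```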

Real-World Example:

Applications such as Spotify and WhatsApp illustrate the successful use of scoped storage to meet Android’s stricter privacy standards. Scoped storage isolates apps from external files and system data other than what they have created themselves. For example, WhatsApp by default keeps all of its files in its scoped storage but lets users choose to store media elsewhere on the device. This balances security and user control, enabling these apps to scale to millions of users while preserving both performance and privacy.

2. Effective Strategy for Caching Data

Effective caching strategies play a vital role in optimizing performance and user experience, especially in data-heavy applications. A cache temporarily holds frequently accessed data, reducing the need to repeatedly fetch it from remote servers or databases and thus improving speed and responsiveness. However, without proper management, caches can grow uncontrollably, leading to unnecessary storage consumption and slower app performance.

Best Practices for Caching:

Caching is best implemented and managed by the app itself: by thoughtfully deciding what to cache, how large the cache may grow, and when to evict entries, apps can enhance performance and optimize user experience while conserving device resources.

A good example is YouTube, which implements adaptive caching through its Smart Downloads feature. This functionality downloads and caches recommended videos, ensuring they are available to users even without internet connectivity. Additionally, YouTube’s approach adjusts cache size based on available storage, preventing bloat and performance regressions while maintaining a seamless user experience.
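
A small sketch of that practice in Kotlin: size an in-memory cache relative to what the device can spare and let LruCache evict least-recently-used entries. The one-eighth fraction is a common starting point, not a fixed rule.

```kotlin
// Size a bitmap memory cache relative to what the device can spare, and let
// LruCache evict the least-recently-used entries when the cap is reached.
import android.app.ActivityManager
import android.content.Context
import android.graphics.Bitmap
import android.util.LruCache

fun createBitmapCache(context: Context): LruCache<String, Bitmap> {
    val activityManager = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val memoryClassBytes = activityManager.memoryClass * 1024 * 1024
    val cacheSize = memoryClassBytes / 8  // a common starting point, tuned per app

    return object : LruCache<String, Bitmap>(cacheSize) {
        // Measure entries by their real byte size, not by entry count.
        override fun sizeOf(key: String, value: Bitmap): Int = value.byteCount
    }
}
```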

3. Using Cloud Integration to Expand Storage

Cloud storage solutions have revolutionized how apps manage data, offering a practical way to overcome the limitations of local device storage. By using the cloud, applications can offload large files and backups, helping them run smoothly on devices with constrained storage. It’s worth noting that cloud integration is most beneficial when a backend server is available to do the processing.

Google Photos is a good example of seamless cloud integration. The app not only relieves storage pressure on the local device by backing up photos and videos to the cloud, but also lets backend servers process the content, automatically adding tags, geolocation metadata, and other contextual information that make search and retrieval more powerful. This processing, which would be inefficient or impossible on a local device, greatly improves the user experience by offering faster and more accurate search results.

Best Practices for Cloud Integration:

  • Selective Syncing: Allow users to decide which data gets uploaded to the cloud and which remains local, giving them greater control over their storage.
  • On-Demand Downloads: Only fetch data from the cloud when necessary to minimize storage usage on the device.
  • Real-Time Updates: Implement real-time synchronization with cloud storage to ensure that data remains up-to-date without manual intervention.
  • Enhanced User Privacy: Use encryption and secure transfer protocols to protect user data both in transit and at rest.

By utilizing cloud storage effectively, developers can optimize app performance, conserve local device resources, and unlock advanced functionality through server-side processing. This strategy is particularly valuable for apps that manage large media files or require computationally intensive features beyond the capabilities of a mobile device.

Advanced Solutions: Beyond Traditional Storage Management

Custom Scoped Storage Management 

While the solutions above rely on existing platform methods to improve on-device storage management, storage becomes harder to manage at the app level as the application scales and multiple sub-products and services compete for the same storage space.

Because Android applications are sandboxed, developers have two main app-private directories in which to store files:

  • Context.getFilesDir() returns a directory within the app’s sandbox where developers can store files related to the app. These files are generally deleted only when the app is uninstalled or its data is cleared.
  • Context.getCacheDir() returns a similar directory, but for cached files. Cached files should be cleaned up by the app, though they can also be removed by the OS or by third-party storage cleaner apps.
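
A short Kotlin illustration of the difference in practice: durable data goes under filesDir, regenerable data under cacheDir, and the app periodically trims its own cache. The file names and the 10 MB cap are arbitrary.

```kotlin
// filesDir: durable, app-private data that should survive until uninstall.
// cacheDir: regenerable data the OS (or the app) may delete under storage pressure.
import android.content.Context
import java.io.File

fun saveUserNotes(context: Context, notes: String) {
    File(context.filesDir, "notes.txt").writeText(notes)
}

fun cacheThumbnail(context: Context, id: String, bytes: ByteArray) {
    File(context.cacheDir, "thumb_$id.jpg").writeBytes(bytes)
}

fun trimCacheIfNeeded(context: Context, maxBytes: Long = 10L * 1024 * 1024) {
    // Apps are expected to keep their own cache tidy rather than rely on the OS.
    val files = context.cacheDir.listFiles()?.sortedBy { it.lastModified() } ?: return
    var total = files.sumOf { it.length() }
    for (file in files) {
        if (total <= maxBytes) break
        total -= file.length()
        file.delete()
    }
}
```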

As the app scales, better storage management can be achieved by introducing a single entry point: a Storage Layer that sits above Android’s APIs. The Storage Layer can then hand out managed subdirectories to products and services, under the Cache or Files sandbox directories, based on configuration (a minimal sketch of such a layer follows the list of advantages below).

This API layer has many advantages:

  1. Ownership: The product or service that requests a subdirectory has clear ownership of it and of all files under it. No other product or service should access or change anything within this directory.
  2. Automatic cleanup: A managed directory can be automatically cleaned up after use. The configuration can include a parameter stating how long the data should be kept, which prevents stale data from taking up precious space on the device.
  3. Limits: Managed, partitioned directories make it possible to set limits on the data they contain. Once a limit is exceeded, the directory can be cleaned up; other policies, such as LRU-based cleanup, can also be used to retain frequently used files.
  4. Versioning: App scaling and growing over time can mean changes to the data being stored, additional metadata or entire change to the storage itself. These can be versioned from the Storage Layer with migrators in place to move data between versions.
  5. User Scoping: An additional boon of managed storage is user-scoped storage. Products and services that hold user data can store it in user-scoped subdirectories, which can be cleaned up automatically when the user logs out or switches accounts. This significantly boosts the privacy of the app by ensuring no user data is kept once the user removes their account.
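
Below is a minimal sketch of such a Storage Layer: a feature requests a managed subdirectory by passing a configuration with an owner, a retention period, a size cap, and an optional user-scoping flag. The class name, configuration shape, and policy details are illustrative assumptions, not a description of any particular app’s implementation.

```kotlin
// Minimal sketch of an app-level Storage Layer: features request a managed
// subdirectory with an owner, a retention period, a size cap, and optional
// user scoping. All names and the config shape are illustrative.
import android.content.Context
import java.io.File
import java.util.concurrent.TimeUnit

data class DirConfig(
    val owner: String,               // feature/team that owns the directory
    val useCache: Boolean,           // place under cacheDir instead of filesDir
    val maxAgeDays: Long,            // retention before automatic cleanup
    val maxBytes: Long,              // size cap for the subdirectory
    val userScoped: Boolean = false  // wiped on logout or account switch
)

class StorageLayer(private val context: Context, private val currentUserId: () -> String?) {

    fun directoryFor(config: DirConfig): File {
        val root = if (config.useCache) context.cacheDir else context.filesDir
        val userSegment = if (config.userScoped) "user_${currentUserId() ?: "anon"}" else "shared"
        val dir = File(File(root, userSegment), config.owner).apply { mkdirs() }
        enforcePolicies(dir, config)
        return dir
    }

    private fun enforcePolicies(dir: File, config: DirConfig) {
        // Retention: drop anything older than the configured age.
        val cutoff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(config.maxAgeDays)
        dir.listFiles()?.filter { it.lastModified() < cutoff }?.forEach { it.delete() }

        // Size cap: drop least-recently-modified files first (LRU-style).
        var total = dir.listFiles()?.sumOf { it.length() } ?: return
        for (file in dir.listFiles()?.sortedBy { it.lastModified() }.orEmpty()) {
            if (total <= config.maxBytes) break
            total -= file.length()
            file.delete()
        }
    }

    // Called when the user logs out or switches accounts.
    fun clearUserScopedData(userId: String) {
        File(context.filesDir, "user_$userId").deleteRecursively()
        File(context.cacheDir, "user_$userId").deleteRecursively()
    }
}
```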

Conclusion: Towards Smart Storage Ecosystem

In conclusion, the Android storage landscape is evolving at a very fast pace. Optimizing storage is no longer just about managing space; it is about creating intelligent, scalable systems that balance user expectations with app performance. The more complex mobile apps become, the greater the demand for robust storage solutions that can scale across millions of devices.

Further, developers are armed with a host of tools, from scoped storage and custom storage management optimizations to cloud-based solutions. These innovations help developers create applications that scale efficiently and offer seamless experiences that keep users coming back for more.

The big question for the future is how further developments in AI and cloud computing will continue to redefine mobile app experiences and change the way we use our devices. The answer will likely depend on continued innovation and collaboration across the entire Android ecosystem.

What is Data Privacy?

Data privacy, sometimes called information privacy, simply means you get to decide who sees your personal information and what they do with it. Your name, email, credit card number, and even your fingerprints all count as personal data, and you should have a say in how that data is gathered, kept, and, of course, used.

Because business relies on customer insights, many companies routinely collect details such as email addresses, online activity, and payment information. For them, honouring data privacy means asking clear permission before they process that data, locking it up so outsiders cannot misuse it, and giving people easy ways to update or delete their information.

Laws like the General Data Protection Regulation, or GDPR, actually require some firms to respect these privacy rights. Yet even brands not covered by formal rules still gain from strong privacy practices. The tools and habits that guard customer confidentiality also form a sturdy shield against hackers chasing sensitive data.

Data Privacy Versus Data Security

Although people often mix them up, data privacy and data security cover different ground yet work hand-in-hand. Together, they form a key part of how any solid company manages its data.

Data privacy is all about the rights of the people whose information is gathered, stored, and used. From a business viewpoint, that means putting in place rules and steps that let users see, change, or delete their data as the law requires.

Data security, on the other hand, zeroes in on keeping information safe from hackers, careless staff, or anyone else who shouldn’t get in. Inside a company, securing data usually comes down to firewalls, encryption, access passwords, and regular system checks.

Since security keeps intruders away, it naturally helps protect users’ personal details. At the same time, privacy guidelines spell out who should see that data and why, so security measures aim their shields in the right direction.

Access

People deserve to see the personal data a company holds about them, and they should be able to do it whenever they want. When they find mistakes or simply want to change something, updating that data should be just as easy.

Transparency

Customers also have the right to know who else has their data and exactly what those people are doing with it. When information is first collected, businesses must spell out what they are taking and how they plan to use it, not hide it in fine print. Afterward, firms should keep users posted about any important changes, including new ways the data will be used or new companies it will be sent to.

Inside a company, there should be a living list of all the data it holds so that everyone agrees on what is kept and why. Each piece of data can then be labeled by its type, sensitivity level, and any laws it must follow. Finally, rules on who can see and use that data should match those labels and be enforced at all times.

Consent

Before collecting, storing, sharing, or processing any personal data, organizations should ask users for clear, honest consent. If an organization relies on consent to keep records, it must also respect users’ right to change their minds later.

When consent is absent, a company must still show a strong reason for carrying on, such as meeting a legal duty or serving the public good. Users must be able to raise questions, lodge objections, and withdraw permission easily, without jumping through countless hoops.

Quality

A team that treats personal data responsibly also works to keep that information accurate, up to date, and free of mistakes. Even small errors can cause serious harm; a wrong address may send sensitive documents to the wrong doorstep, leaving the real owner in the dark. Regular checks and a culture of care help reduce these risks, protecting both users and the organisation.

Collection, retention and use limitation

Every time a business gathers personal data, it should first ask, “Why do I need this?” Once the reason is clear, that same reason should be shared with users, and the data must be used only for that goal. To avoid gathering needless information, the company should limit its collection to what is absolutely necessary, and it should delete records as soon as the original purpose is satisfied.

Privacy by design

Privacy should not be an afterthought; it must be built into every system, app, and process from day one. New products and features should always start with a privacy checklist, making sure users’ data is treated as a valuable asset. Whenever possible, data collection should be opt-in, so users actively agree instead of having to search for a way to say no. Throughout the entire journey, customers should feel that they are in the driver’s seat with their own information.

Security

Protecting customer data goes beyond asking employees to be careful; organizations need solid processes and technical controls that guard confidentiality and keep information intact. This might include encrypting data at rest and in transit, using strong access controls, and regularly testing for weaknesses.

At the practical level, companies can train staff on privacy rules, review vendor agreements for data safeguards, and partner only with suppliers that share a serious commitment to protecting users.

When it comes to tech-based shields for sensitive information, companies have plenty of options. Identity and Access Management, or IAM, makes sure only the right people see certain files by following role-based access rules. Authentication extras, such as Single Sign-On and Multi-Factor Authentication, act like extra door locks that keep thieves from exploiting a legitimate user’s stolen password.

Data Loss Prevention, usually short-handed as DLP, scans for private information, labels it, watches how it gets used, and stops anyone from mis-editing, sharing, or outright deleting it. Regular backups and archiving systems provide a safety net, letting businesses retrieve accidentally erased or corrupted data.

For teams worried about following legal rules, there are specialised data-security suites built just for that purpose. They bundle encryption, automatic policy checks, and detailed audit logs that record every important move the data makes.

Why Data Privacy Matters

Modern companies gather huge piles of customer information every single day. Because of that, they need to guard that data carefully. They don’t do it just because it sounds nice; they do it to meet laws, keep hackers out, and stay ahead of rivals.

Laws That Put Privacy First

Groups like the UN call privacy a basic human right. Because of this idea, many nations have passed laws that turn that right into legal rules. Break the rules, and angry regulators will hit you with eye-watering fines.

One of the toughest of these laws is the European Union’s GDPR. It spells out exactly how any business, no matter where it sits, must handle the data of EU customers. Fail to follow the rules and you could be fined up to 20 million euros or 4% of your total global annual revenue, whichever is higher.

Outside Europe, other places have their own privacy rules, such as the UK GDPR, Canada’s PIPEDA, and India’s new Digital Personal Data Protection Act.

The United States still lacks a single, broad federal privacy law like Europe’s GDPR, but several narrower rules are on the books. The Children’s Online Privacy Protection Act (COPPA), for instance, tells websites what they can and can’t do with data from kids younger than 13. Healthcare privacy is handled by the Health Insurance Portability and Accountability Act (HIPAA), which guides hospitals, insurers, and vendors in storing and sharing medical records.

Violating these laws can cost companies a lot of money. In 2022 Epic Games paid a staggering $275 million after regulators found it had broken COPPA.

At the state level, the California Consumer Privacy Act (CCPA) arms Californians with extra say over how businesses collect and use their information. Though the CCPA gets most of the spotlight, it has motivated other states, including Virginia with its Virginia Consumer Data Protection Act (VCDPA) and Colorado with the Colorado Privacy Act (CPA), to roll out similar rules.

Security posture

Most businesses gather a mountain of personal information, including customers’ Social Security numbers and bank account details. Because of that treasure chest, cybercriminals keep aiming their sights on this data, turning it into stolen identities, drained accounts, or fresh listings on the dark web.

Beyond client info, many firms also guard their own secrets, such as trade secrets, patents, and sensitive financial records. Hackers see any valuable data, old or new, as fair game and will try every trick to get in.

The 2023 IBM Cost of a Data Breach report put the average cost of an incident at US$4.45 million. Downtime, forensic investigations, regulatory fines, and lost trust all stack up and keep that number growing.

Fortunately, tools built for privacy double as powerful defenses. User access controls stop outsiders before they ever touch sensitive files, and many data monitors spot odd behavior early so that response teams can jump in sooner. Investing in these shared technologies helps lower breach odds while keeping regulatory promises intact.

Workers and shoppers alike can protect themselves from nasty social-engineering scams by following simple data-privacy tips. Fraudsters dig through social-media accounts to find personal details, then use that info to build realistic business-email-compromise (BEC) and spear-phishing scams. By posting less online and tightening privacy settings, people take away a key fuel that lets crooks craft these convincing attacks.

Competitive Advantage

Putting user privacy front and center can actually give a business a heads-up over its rivals.

When companies drop the ball on data protection, customers lose faith fast. Remember how Facebook’s name tanked after the Cambridge Analytica mess? Once burned, many shoppers are hesitant to hand their info to brands with a shaky privacy record.

On the flip side, firms known for strong privacy guardrails find it much easier to collect and use customer data.

In today’s linked economy, bits and bytes zip from one company to another every second. A retailer might save contact lists in the cloud or send sales figures to a third-party analyst. By weaving solid privacy rules into these processes, organizations can lock down data and guard it from prying eyes even after handing it off. Laws like Europe’s GDPR remind everyone that, in the end, the original company is still on the hook if a vendor leaks information.

New generative AI tools can quickly turn into privacy headaches. Plug in sensitive info, and that data might end up in the model’s training set, often beyond the company’s reach. A well-known case at Samsung showed how easily this can happen: engineers pasted proprietary source code into ChatGPT, seeking tweaks, and ended up leaking the very code they meant to protect.

Beyond that, running anyone’s data through these systems without their clear OK can cross the line under many privacy rules.

Strong, formal privacy policies and clear controls let teams use generative AI and other cutting-edge tech without losing user trust, breaking the law, or mishandling confidential data.

How to Influence as a Product Designer: 5 Approaches

Author: Oleksandr Shatov, Lead Product Designer at Meta

***

Before we begin, it is crucial to clarify who an influential product designer is: a specialist who consistently shapes big product strategy decisions. Collaborating with design teams at Meta daily, I have noticed that influential designers share five behaviours:

  1. Doing the hard work 
  2. Being a simplifier
  3. Making others successful 
  4. Building trust 
  5. Communicating more

Let us elaborate on each. 

Do the hard work. 

An influential product designer does not merely point out what is wrong and immediately escalate the issue; that rarely prompts teams to act. Instead, I suggest trying this approach:

  1. Highlight the problem with context: for example, showing metrics (X% decline) or user feedback. You may also connect the issue with business goals or desirable outcomes (“This leads to a drop-off in Q3 revenue targets”). 
  2. Step back to analyse: identify root causes, research other products’ experiences with the same problem, and consider technical limitations. 
  3. Understand the potential: you need to analyse whether the issue is significant. For this, you need to determine the consequences the problem will lead to in the long run and assess risks. 
  4. Come with proposals: present 2-3 ideas, including information about effort estimates and resource requirements. 

An influential product designer would provide concrete actions: “According to user testing results, 7 out of 10 users cannot find the save button. I have designed two alternative layouts that address the issue”. 

Be a simplifier 

Companies need product designers who simplify complex ideas and make them clear to everyone. To master the skill,

  1. Choose problems others care about.
  2. Break down the challenge into core components: start with the big picture and then analyse it more thoroughly. 
  3. Make the issues and the solutions understandable: provide clear, written explanations of both.

Using this approach, an influential product designer will make teams listen.

Make others successful. 

Set your mind to amplifying others and ask yourself: What can I do to be helpful to them? How can I support their growth? For instance, you may mentor junior product designers.

Do not focus only on your own goals; a product designer who does will not influence people. Instead, support others.

Build trust. 

Building trust with colleagues and leaders on a new team is necessary before any decision-making. To accelerate trust as an influential product designer, schedule 1:1 calls with teams and stakeholders to understand their challenges. This approach will help you make an impact.

Communicate more. 

Overcommunication does not mean being annoying. In fact, it helps you understand the issue and be understood by others. 

Do not be afraid to ask open-ended questions and be curious: What marks the feature’s success? What constraints should I be aware of when designing this feature? 

Do not hesitate to contact people, but ensure you explain things clearly. Share your updates and explorations, not only finished work. 

I hope it was helpful! What tips helped you influence? Please, share them in the comments. Let us connect if you want to learn more about what I have learned about design, growth, and my career 🙂

What is LLMOps, MLOps for large language models, and their purpose

Why the adaptation (transfer learning) of large language models needs to be managed, what that management includes, and an introduction to LLMOps, the MLOps extension for LLMs.

How did LLMOps come to be? 

Large language models, embodied in generative systems such as ChatGPT and its analogues, have become one of the defining technologies of recent years and are already actively used in practice by both individuals and large companies. However, the process of training large language models (LLMs) and putting them into industrial use must be managed like any other ML system. MLOps has become the established practice for this, aimed at eliminating organizational and technological gaps between everyone involved in developing, deploying, and operating machine learning systems.

As GPT-style models grow in popularity and are embedded in more application solutions, the principles and technologies of MLOps need to be adapted to the transfer learning used in generative models. Language models are becoming too large and complex to maintain and manage manually, which increases costs and reduces productivity. LLMOps, a specialization of MLOps that oversees the LLM lifecycle from training to maintenance with dedicated tools and methodologies, addresses this problem.

LLMOps focuses on the operational capabilities and infrastructure required to fine-tune existing base models and deploy these improved models as part of a product. Base language models are huge (GPT-3, for example, has 175 billion parameters), so they require enormous amounts of training data and compute time; training GPT-3 on a single NVIDIA Tesla V100 GPU would take over 350 years. An infrastructure that can run GPU machines in parallel and process huge data sets is therefore essential. LLM inference is also far more resource-intensive than traditional machine learning, since an application is often not a single model but a chain of models.

LLMOps provides developers with the necessary tools and best practices for managing the LLM development lifecycle. While the ideas behind LLMOps are largely the same as MLOps, large base language models require new methods, guidelines, and tools. For example, Apache Spark in Databricks works great for traditional machine learning, but it is not suitable for fine-tuning LLMs.

LLMOps focuses specifically on fine-tuning base models, since modern LLMs are rarely trained entirely from scratch. They are typically consumed as a service: a provider such as OpenAI or Google AI offers an API to an LLM hosted on its own infrastructure. There is also a custom LLM stack, a broad category of tools for fine-tuning and deploying custom solutions built on top of open-source models. The fine-tuning process starts with an already trained base model, which is then trained further on a smaller, more specific dataset to create a custom model. Once this custom model is deployed, prompts are sent to it and the corresponding completions are returned. Monitoring and retraining the model are essential to ensure consistent performance, especially for LLM-driven AI systems.

Prompt engineering tools allow in-context learning to be performed faster and more cheaply than fine-tuning, without requiring sensitive training data. In this approach, vector databases retrieve contextually relevant information for specific queries, and prompt templates and chaining are used to optimize and improve model output.

Similarities and differences with MLOps

In summary, LLMOps facilitates the practical application of LLM by incorporating operational management, LLM chaining, monitoring, and observation techniques that are not typically found in conventional MLOps. In particular, prompts are the primary means by which humans interact with LLMs. However, formulating a precise query is not a one-time process, but is typically performed iteratively, over several attempts, to achieve a satisfactory result. LLMOps tools offer features to track and version prompts and their results. This facilitates the evaluation of the overall performance of the model, including operational work with multiple LLMs.

LLM chaining links multiple LLM invocations sequentially to provide a single application function: the output of one invocation serves as the input to the next, producing the final result. This design pattern offers an innovative way to build AI applications by breaking complex tasks into smaller steps, and it works around the limit on the maximum number of tokens an LLM can process at once. LLMOps simplifies the management of chains and combines them with other document retrieval methods, such as vector database access.
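
A small sketch of the pattern: two model calls are chained (condense retrieved context, then answer against it) behind a plain interface, and each step’s prompt and latency are recorded the way an LLMOps monitoring layer might capture them. LlmClient and the chain class are hypothetical abstractions, not a specific vendor SDK.

```kotlin
// Sketch of LLM chaining with simple observability: the output of one call
// feeds the next, and each step's prompt and latency are recorded the way an
// LLMOps monitoring layer might. LlmClient is a hypothetical abstraction, not
// a specific vendor SDK.
interface LlmClient {
    fun complete(prompt: String): String
}

data class StepTrace(val step: String, val prompt: String, val output: String, val latencyMs: Long)

class SupportAnswerChain(private val llm: LlmClient) {
    val traces = mutableListOf<StepTrace>()

    private fun call(step: String, prompt: String): String {
        val start = System.currentTimeMillis()
        val output = llm.complete(prompt)
        traces += StepTrace(step, prompt, output, System.currentTimeMillis() - start)
        return output
    }

    fun answer(question: String, retrievedDocs: List<String>): String {
        // Step 1: condense retrieved context so it fits the next prompt's token budget.
        val summary = call(
            "summarize-context",
            "Summarize the following documents in under 200 words:\n" + retrievedDocs.joinToString("\n---\n")
        )
        // Step 2: answer the user's question using only the condensed context.
        return call(
            "answer",
            "Context:\n$summary\n\nAnswer the question using only this context:\n$question"
        )
    }
}
```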

LLM monitoring in LLMOps collects real-time data points after a model is deployed to detect degradation in its performance. Continuous, real-time monitoring lets you quickly identify, troubleshoot, and resolve performance issues before they affect end users. Specifically, prompts, tokens and their lengths, processing time, inference latency, and user metadata are monitored. This makes it possible to notice overfitting or changes in the underlying model before performance visibly degrades.

Monitoring models for drift and bias is also critical. While drift is a common problem in traditional machine learning models, monitoring LLM solutions with LLMOps is even more important because of their reliance on underlying base models. Bias can arise from the original datasets on which the base model was trained, from custom datasets used for fine-tuning, or even from the human evaluators who judge prompt completions. A thorough evaluation and monitoring system is needed to effectively remove bias.

LLMs are difficult to evaluate with traditional machine learning metrics because there is often no single “right” answer. LLMOps therefore relies more heavily on human feedback, incorporating it into testing and monitoring and collecting it as data for future fine-tuning.

Finally, there are differences in how LLMOps and MLOps approach application design and development. LLMOps projects are built for speed and are typically iterative: they start with existing proprietary or open-source base models and end with custom fine-tuned models, whereas traditional MLOps projects more often train models from scratch on carefully curated data.

Despite these differences, LLMOps is still a subset of MLOps. That’s why the authors of The Big Book of MLOps from Databricks have included the term in the second edition of this collection, which provides guiding principles, design considerations, and reference architectures for MLOps.

Data Fabric and Data Mesh: Complementary Forces or Competing Paradigms?

As the data landscape continues to evolve, two frameworks have emerged to help businesses manage their data ecosystems: Data Fabric and Data Mesh. While both aim to simplify a business’s data governance, integration, and access, they differ considerably in philosophy and operation. Data Fabric emphasizes technological orchestration over a distributed environment, whereas Data Mesh emphasizes structural decentralization and domain-centric autonomy. This article examines each framework’s definition, strengths, and limitations, and explores the potential for synergy in a cloud-based architecture that integrates them both.

What is Data Fabric?

The Data Fabric concept originated in 2015 and came into focus after Gartner included it among its top data and analytics trends for 2020. The DAMA DMBOK2 glossary defines data architecture as the plan for managing an organization’s data assets, aligned with a model of the organization’s data structures. Data Fabric implements this by offering a unified framework that automatically and logically integrates multiple disjointed data systems into one entity.

Simply put, Data Fabric is a single architectural layer that sits on top of multiple heterogeneous data ecosystems – on-premises systems, cloud infrastructures, edge servers – and abstracts away their individual complexities. It combines several data integration approaches, such as data access interfaces (APIs), reusable data pipelines, metadata-driven automation, and AI orchestration, to provide unrestricted access and processing. Unlike older data virtualization methods, which only constructed a logical view, Data Fabric also embraces containerization, allowing better management, control, and governance and making it more powerful for modernizing applications than traditional methods.

Key Features of Data Fabric

  • Centralized Integration Layer: A virtualized access layer unifies data silos, governed by a central authority enforcing enterprise standards.
  • Hybrid Multi-Cloud Support: Consistent data management across diverse environments, ensuring visibility, security, and analytics readiness.
  • Low-Code/No-Code Enablement: Platforms like the Arenadata Enterprise Data Platform or Cloudera Data Platform simplify implementation with user-friendly tools and prebuilt services.

Practical Example: Fraud Detection with Data Fabric

Consider a financial institution building a fraud detection system:

  1. An ETL pipeline extracts customer claims data from multiple sources (e.g., CRM, transaction logs).
  2. Data is centralized in a governed repository (e.g., a data lake on Hadoop or AWS S3).
  3. An API layer, enriched with business rules (e.g., anomaly detection logic), connects tables and exposes the unified dataset to downstream applications.
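
To make the flow concrete, here is a toy, in-memory version of those three steps: extract records from two stubbed sources, centralize them, and expose a rule-enriched view through a single access function. Real Data Fabric platforms do this across heterogeneous systems with metadata-driven automation; the data classes and the anomaly rule are invented for illustration.

```kotlin
// Toy version of the three steps above: extract records from two "sources",
// centralize them, and expose a unified, rule-enriched view through a single
// access point. Real Data Fabric platforms do this across heterogeneous
// systems; this only illustrates the shape of the flow.
data class Claim(val customerId: String, val amount: Double)
data class Transaction(val customerId: String, val amount: Double)
data class FraudSignal(val customerId: String, val totalClaims: Double, val totalSpend: Double, val suspicious: Boolean)

// Step 1: "ETL" extraction from the source systems (stubbed as in-memory lists).
fun extractClaims(): List<Claim> = listOf(Claim("c1", 9000.0), Claim("c2", 120.0))
fun extractTransactions(): List<Transaction> = listOf(Transaction("c1", 300.0), Transaction("c2", 2500.0))

// Steps 2 and 3: centralize the data and expose it with an anomaly rule applied.
fun unifiedFraudView(): List<FraudSignal> {
    val claimsByCustomer = extractClaims().groupBy { it.customerId }
    val spendByCustomer = extractTransactions().groupBy { it.customerId }
    return claimsByCustomer.keys.union(spendByCustomer.keys).map { id ->
        val totalClaims = claimsByCustomer[id].orEmpty().sumOf { it.amount }
        val totalSpend = spendByCustomer[id].orEmpty().sumOf { it.amount }
        // Business rule: claims far exceeding observed spend are flagged for review.
        FraudSignal(id, totalClaims, totalSpend, suspicious = totalClaims > 5 * totalSpend)
    }
}

fun main() {
    unifiedFraudView().forEach(::println)
}
```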


While this approach excels at technical integration, it often sidesteps critical organizational aspects – such as data ownership, trust, and governance processes – leading to potential bottlenecks in scalability and adoption.

How Data Mesh Works

Data Mesh, introduced around 2019, is a newer data architecture framework that emphasizes people over technology and processes. Like domain-driven design (DDD), Data Mesh advocates domain-oriented decentralization, distributing data ownership among business units. Unlike Data Fabric, which controls everything from a single point, Data Mesh gives domain teams the responsibility of treating data as a product that can be owned, accessed, and interacted with in a self-service manner.

Core Principles of Data Mesh

  • Domain-Oriented Decentralization: The teams closest to the data, whether they generate it or consume it, own and manage it.
  • Data as a Product: Each dataset is treated and published as a product, complete with features such as access controls and metadata.
  • Self-Service Infrastructure: Decentralized domain teams can operate autonomously because a central platform provides them with self-service tooling.
  • Federated Governance: Domains retain autonomy, while standards, data policies, and interoperability are agreed and enforced through a federated governance model rather than a single central authority.

Practical Example: Fraud Detection with Data Mesh

Using the same fraud detection scenario:

  1. A domain team (e.g., the claims processing unit) defines and owns an ETL/ELT job to ingest claims data.
  2.  Datasets (e.g., claims, transactions, customer profiles) are stored separately, each with a designated owner.
  3.  A data product owner aggregates these datasets, writing logic to join them into a cohesive fraud detection model, delivered via an API or event stream.

This approach fosters accountability and trust by embedding governance into the process from the outset. However, its reliance on decentralized teams can strain organizations lacking mature data cultures or robust tooling.

Emerging Tools

Data Mesh is still maturing technologically. Google’s BigLake, launched in 2022, exemplifies an early attempt to support Data Mesh principles by enabling domain-specific data lakes with unified governance across structured and unstructured data.

Data Fabric works best with complex, siloed infrastructures, since it offers a top-down approach to data access. Data Mesh, on the other hand, performs well in decentralized organizations that are willing to undergo a cultural shift and place more emphasis on trust and agility than on technical standardization.

Whether Data Fabric or Data Mesh fits best is determined by the enterprise’s operational context and its digital transformation journey, and the cloud provides a platform where both approaches can be integrated. Consider an architecture with an event bus (for example, Apache Kafka) that streams data to many different consumers. The consumers could include AWS S3 acting as a data lake and ETL pipelines (Airflow for batch, NiFi for streaming) that integrate operational and historical data. Add a robust Master Data Management (MDM) layer, and the resulting analytics will be of good quality.

This is where the synergy shines: Data Fabric's centralized integration sets up the infrastructure, while Data Mesh's domain autonomy makes it possible to innovate. The result is a cloud-native application platform that both enables and governs innovation. A Business Intelligence (BI) dashboard is one example: it could draw on clean data products published by Mesh domains, such as an IoT domain, while the Fabric governs seamless access to the underlying data.

A Call to Innovate

Marrying these paradigms isn’t without hurdles. Architects and engineers must grapple with:

  • Migration Complexity: How do you transition on-premises data to the cloud without disruption?
  •  Real-Time vs. Batch: Can the platform balance speed and depth to meet business demands?
  •  Data Quality: How do you embed quality checks into a decentralized model?
  •  Security and Access: What federated security model ensures ease without compromising safety?
  •  Lifecycle Management: How do you govern data from creation to destruction in a hybrid setup?


Moreover, the cloud isn’t a silver bullet. Relational databases often fall short for advanced analytics compared to NoSQL, and data lake security models can hinder experimentation. Siloed data and duplication further complicate scalability, while shifting from centralized to decentralized governance requires a cultural leap.

The Verdict: Together, Not Versus

So, is it Data Fabric versus Data Mesh? The two are not really in conflict; they work hand in hand. Data Fabric weaves the technological threads that provide overarching access to information, while Data Mesh gives operational teams the authority to manage their own data. In a cloud-powered ecosystem, together they have the potential to revolutionize data management by merging centralization's productivity with decentralization's creativity. The challenge is not which one to select, but how to combine their assets into a harmonious orchestra that nurtures trust, agility, and enterprise value. As the tools mature and organizations transform, these two concepts may well be the paradigm shift that data architecture has long been waiting for: shaken, stirred, and beautifully blended.

UnlockED Hackathon

Shaping the Future of Education with Technology – February 25-26, 2024

ExpertStack proudly hosted the UnlockED Hackathon, a high-energy innovation marathon focused on transforming education through technology. With over 550 participants from the Netherlands and a distinguished panel of 15 judges from BigTech, the event brought together some of the brightest minds to tackle pressing challenges in EdTech.

The Challenge: Reimagining Education through Tech
Participants were challenged to develop groundbreaking solutions that leverage technology to make education more accessible, engaging, and effective. The hackathon explored critical areas such as:

  • AI-powered personalized learning – Enhancing student experiences with adaptive, data-driven education.
  • Gamification & immersive tech – Using AR/VR and interactive platforms to improve engagement.
  • Bridging the digital divide – Creating tools that ensure equal learning opportunities for all.
  • EdTech for skill-building – Solutions focused on upskilling and reskilling for the digital economy.

For 48 hours, teams brainstormed, designed, and built innovative prototypes, pushing the boundaries of what’s possible in education technology.

And the Winner is… Team OXY!
After an intense round of final presentations, Team OXY took home the top prize with their AI-driven adaptive learning platform that personalizes study plans based on real-time student performance. Their solution impressed judges with its scalability, real-world impact, and seamless integration with existing education systems.

Driving Change in EdTech
The UnlockED Hackathon was more than just a competition—it was a movement toward revolutionizing education through technology. By fostering collaboration between developers, educators, and industry leaders, ExpertStack is committed to shaping a future where learning is smarter, more inclusive, and driven by innovation.

Want to be part of our next hackathon? Stay connected and join us in shaping the future of tech! 🚀

Mobile Development Trends to Follow in 2024

The mobile development industry saw tremendous innovation and change in 2023. New programming languages and libraries, along with new tools and technologies, continue to drive rapid growth in the field. Looking ahead to 2024, several trends will shape the mobile development sector. Here is what to focus on, what will be in demand, and what developers should keep an eye on.

Artificial Intelligence: A Game-Changer in Mobile Development

Artificial Intelligence (AI) remains the most prominent and promising direction in mobile development for 2024. According to the McKinsey Global Institute, AI has the potential to increase company profits by $4.4 trillion annually. As AI continues to revolutionize industries across the board, mobile developers are increasingly incorporating generative AI into everyday applications.

Machine Learning (ML) technologies have already found a strong foothold in mobile development. Libraries for image recognition, text scanning, voice and video processing, and tools for improving app performance, such as those combating memory leaks, are just the beginning. FaceID, for instance, relies heavily on ML to enable secure and seamless authentication.

Since late 2022, there has been an explosion in the development of AI systems, and generative AI technologies like ChatGPT are now at the forefront of this revolution. In the Russian market, Yandex’s Yandex GPT is also making waves. Moving into 2024, companies are increasingly integrating AI-based solutions into their apps. This integration is not limited to improving user experiences through better content recommendations and personalized services but extends to tasks like automatic translation and smart user interaction.

For mobile developers, AI’s role is expanding beyond app functionality to the very development process itself. AI-driven tools are now capable of generating and optimizing code, which could significantly speed up the development cycle. However, questions still remain about the quality and safety of AI-generated code—particularly in terms of security and performance. Despite these concerns, the trend toward AI-enhanced development is undeniable and promises to evolve further in 2024.

Cross-Platform Development: Kotlin Multiplatform and Flutter Take Center Stage

Cross-platform mobile development has been a growing trend for several years, allowing developers to build apps for multiple platforms with a single codebase. In 2024, two solutions are leading the charge: Kotlin Multiplatform (KMP) by JetBrains and Flutter by Google. These technologies are not only popular but continue to evolve and improve, making them key players in the cross-platform development space.

Kotlin Multiplatform (KMP) is gaining traction as a versatile SDK for developing apps across Android, iOS, and other platforms. With its strong Kotlin ecosystem, KMP allows developers to share code between platforms while maintaining the ability to write platform-specific code where necessary. The result is a streamlined development process that reduces redundancy without sacrificing performance.

Flutter, Google’s open-source UI toolkit, is another heavyweight in the cross-platform development world. Known for its fast development cycle and rich set of pre-designed widgets, Flutter continues to evolve with regular updates that enhance its capabilities, performance, and integration with various platforms. Flutter’s flexibility makes it an appealing choice for developers seeking to create beautiful, natively compiled applications for mobile, web, and desktop from a single codebase.

Both KMP and Flutter will see further advancements in 2024, with JetBrains conducting an annual survey to gauge the growing popularity of Kotlin and KMP. Expect these tools to introduce new features, optimizations, and capabilities that make cross-platform mobile development even more efficient and powerful.

Source: https://www.jetbrains.com/lp/devecosystem-2023/kotlin/

In the fall of 2023, something that many developers and fans of KMP had been waiting for finally happened: the technology reached Stable status and became fully ready for production use. Many issues from the Alpha and Beta versions of the SDK were resolved; for example, support for Kotlin/Native, multithreading, and memory management was improved. One goal on the 2024 roadmap is direct interoperability between the Kotlin and Swift languages. Even though Google's developers are the authors of the competing Flutter, the company is officially placing a big bet on KMP, and cross-platform support is included in many of its solutions. We advise you to take a closer look at this product.

Another trend related to cross-platform development is the use of Compose Multiplatform to implement a cross-platform UI. This declarative framework combines technologies such as Compose for Desktop, Compose for iOS, Compose for Web, and Jetpack Compose for Android, and allows you to quickly and relatively easily create a shared UI for different platforms. In 2023, the alpha version of Compose for iOS was released. The Beta version is expected in 2024, bringing improvements in working with the native iOS UI as well as support for cross-platform navigation.

Globally, solutions such as React Native, hybrid development with Cordova and Ionic, and Xamarin remain popular. The trend toward PWA development also continues.

Native development

Technologies and frameworks come and go, but native development remains. It is the foundation that every developer should know. Native languages and tools, along with the Native First approach, will always be relevant. For projects with complex logic or a complex UI, native development is still the right choice.

Every year, the developers of the iOS and Android platforms, as well as of the Swift and Kotlin languages, release many new and interesting solutions, which they present at their respective conferences, WWDC and Google I/O.

Particular attention should be paid to the growing trend toward declarative development in mobile applications. The SwiftUI and Jetpack Compose frameworks are actively evolving, steadily becoming more convenient and reliable, and they are increasingly used in applications of varying complexity. Many libraries and ready-made solutions are now tailored for SwiftUI and Jetpack Compose. It is fair to say this is becoming the new standard for Android and iOS development.

It is also worth supplementing mobile applications with interactive widgets, which help draw attention to the app and provide instant access to key functions.

Virtual reality

In 2023, at its WWDC conference, Apple presented one of its most high-profile new products: the Vision Pro headset running on the visionOS platform. It is far from the first such device, and similar hardware has long been used in the gaming industry, but the emphasis here is on unique immersive technologies and improved sound and image quality. Creating the device involved large-scale work in the field of spatial computing. visionOS support has been added to tools such as ARKit, RealityKit, Unity, and Reality Composer, and both the toolkit and the documentation are already available to interested developers. Sales of the device itself are scheduled to start in 2024-2025.

Recently, the creators of Vision Pro announced the imminent launch of an app store for the headset. This means that in 2024 we will see a boom in applications for AR/VR devices: games, virtual fitting rooms, interior design tools, immersive movie watching, music listening, and more. In addition, according to Statista, the number of AR and VR users worldwide will reach roughly 3.67 billion by 2028.

Both Google and Apple are also focusing on improving the immersive user experience on standard smartphones, watches, and tablets.

Not just smartphones. IoT

Every year, a variety of new devices running Android and iOS are released. These are not only smartphones and tablets but also watches, smart TVs, game consoles, fitness trackers, car computers, and smart home devices. OS vendors are interested not only in supporting new device capabilities but also in improving the tools available to third-party developers.

The use of NFC and Bluetooth in applications remains relevant. Yahoo Finance predicts that the NFC market will grow by 33.1 billion US dollars between 2023 and 2030.

Security, improved network operation

Protecting confidential information has been, and remains, one of developers' main tasks, as Gartner's forecasts confirm. Introducing enhanced security measures (biometric authentication, blockchain-based approaches, and so on) is highly relevant in 2024. Special attention should also be paid to stable and secure network operation, including work with cloud services, with NFC, and when connecting to other devices. Modern mobile operating systems offer a wide range of native tools for implementing and maintaining secure applications.

Let’s sum it up

In 2024, mobile platforms, languages, and development tools will continue to evolve. The main areas we recommend paying attention to are:

  • native development;
  • cross-platform;
  • development for various mobile devices, IoT;
  • support and use of Russian technologies;
  • security, network operations;
  • AR/VR.

Breathing New Life into Legacy Businesses with AI

Author: Jenn Cunningham, Go-to-Market Lead, Strategic Alliances at PolyAI. She manages key relationships with AWS and global consulting partners while collaborating closely with the PolyAI cofounders on product expansion and new market entry. Her journey from data science beginnings to implementation consulting gives her a front-row seat to how legacy businesses are leveraging AI to evolve and thrive.

***

When I had just finished my university degree, data science and data analytics were the hot topic, as businesses were racing to become more data-driven organizations. I was excited to unlock new customer insights and inform business strategy, until my first project. After eight weeks of cleansing data and crunching numbers for 12 hours a day, it was glaringly obvious that I was entirely too extroverted for a pure data science role. This led me to start a personal research project, exploring how businesses evaluate, implement, and adopt different types of process automation technology as the technology itself continued to evolve. That evolution led me to appreciate what data and AI could do for the operations of businesses labeled "legacy," not only for efficiency but also for improving the service users receive. These companies tend to be branded as slower to adapt, but they are full of indisputable value waiting for the right "nudge."

AI is providing that nudge, because today AI does more than automate boring work; it is changing how businesses perceive value. A century-old bank, a global manufacturer, a regional insurer: these are just a few examples of businesses that are weaving AI into their core technologies, improving their internal systems while retaining their rich history.

This didn't happen suddenly; there were many steps involved, each more groundbreaking than the last. To truly understand how AI reached its current state, we need to wind the clocks back to a time when data wasn't an inevitability but a luxury.

The First Wave: Data as a Project

In the infancy of data science within companies, data was treated like a whiteboard-and-marker experiment. Businesses seemed unsure what to do with data, so they treated it like a project, with a start, an end, and a PowerPoint presentation somewhere in the middle to showcase interim findings. Along the way, "let's get a data scientist to look at this" became a common refrain, a casual, one-off approach applied to one business domain after another.

At the time of my research, the shift was just beginning: organizations were moving from gut feelings to data-informed strategies, but everything still felt labored. Clients unfamiliar with the process often took things quite literally. In one case, clients took a do-it-yourself approach, printing out .txt files of customer interactions like Word documents, cutting them up with scissors, and posting them around a conference room, where they calculated key metrics by hand, calculators and highlighters at the ready. It was data science in its untapped, unrefined glory.

The purpose wasn't to create sustainable systems. Instead, it was to answer prompts such as "What's our churn rate?" or "Was this campaign successful?" These questions, while important in their own right, produced only fleeting answers. Each project felt like a one-off victory with little future potential. There was no reusable framework, no collaboration across teams, and certainly no foresight about what data could evolve into.

However, this preliminary wave had significance as it allowed companies to recognize the boundaries of instinct-driven decision-making and the usefulness of evidence. Although the work was done in stages, it rarely resulted in foundational changes, and even when insights did materialize, they were not capable of driving widespread change.

The Second Wave: Building a Foundation for Ongoing Innovation

Gradually, a new understanding seemed to surface,  one that moved data from being a tactical resource to a strategic asset. In this second wave, companies sought answers to more advanced inquiries. How do we use data to enable proactive decision-making rather than only responsive actions? How can we incorporate insights into the operational fabric of our company? 

Rather than bringing on freelance data scientists on a contractual basis, the companies working with Data at the time transformed their approaches by building internal ecosystems of expertise composed of multidisciplinary teams and fostering a spirit of innovation. Thus, the focus shifted from immediate problem-solving to laying the foundational systems for comprehensive future infrastructure. 

Moreover, data started to shift from the back-office functions to the forefront. Marketing, sales, product, and customer service functions received access to real-time dashboards, AI tools, predictive analytics, and a host of other utilities. Therefore the democratization of data accelerated to bring the power of AI data insights to the decision makers who worked directly with customers and crafted user experiences.

What also became clear during this phase was that not all parts of the organization required the same level of AI maturity at the same time. Some teams were ready for complete automation; others just needed clean reporting, which was perfectly fine. The goal was not uniform adoption; it was movement. The most forward-thinking companies understood that change didn't have to happen everywhere at once; it just needed a starting point and careful cultivation.

This was the turning point when data began evolving from a department into a capability; it could now drive continuous improvement instead of relying on project-based wins. That is when the flywheel of innovation began to spin.

The Current Wave: Reimagining Processes with AI

Today, we are experiencing a third and possibly the most impactful wave of change. AI is no longer limited to enhancing analytics and operational efficiency; it now rethinks the very framework of how businesses are structured and run. What was previously regarded as an expense is now considered a decisive competitive advantage.

Consider what PolyAI and Simplyhealth have done. Simplyhealth, a UK health insurer, partnered with PolyAI to implement voice AI within their customer service channels. This integration went beyond basic chatbots. The result was "empathetic AI": it could understand urgency, recognize vulnerable callers, and make judgment calls about when a caller should be handed over to a human agent.

Everyone saw the difference. There was less waiting around, better call resolution, and, most crucially, those who required care from a member of staff received it. AI did not take the person out of the process; it elevated the person within the process, letting empathy and efficiency work side by side.

Such a focus on building technology around humans is rapidly becoming a signature of AI-driven change. You see it in retail, where AI customizes every touchpoint in the customer experience. It's happening in manufacturing, where predictive maintenance avoids the costs of breakdowns. And financial services are undergoing massive shifts as AI offers personalized financial advice, fraud detection, and assistance to those underserved by traditional support.

In all these examples, AI technologies support rather than replace people. Customer service representatives are equipped with richer context that improves their responses, workers are freed from repetitive tasks, and strategists can concentrate resources where they matter. Today's best AI use cases focus on augmenting the human experience rather than reducing the workforce.

Conclusion

Too often, the phrase "legacy business" is used to describe something old-fashioned or boring. In fact, these are businesses with long-standing customer relationships and histories that enable them to evolve in meaningful ways.

Modern AI solutions don't simply replace manual labor; the journey from spreadsheets and instinct-based decisions to fully integrated AI systems is more complex than that. Businesses adopt modern practices progressively, with vision and patience for the cultural change involved. Legacy businesses are not merely keeping pace with this evolution; many are leading the race.

AI today is changing everything and is becoming a culture-driving force. It affects the very way we collaborate, deliver services, value customers, and much more. Whether implementing new business strategies, redefining customer support, or optimizing logistics, AI is proving to be a propellant for human-centered transformation.

Further, the visionaries and team members who witnessed this evolution firsthand found unity in action, participating as pilots, data, and algorithms came together. It reminds us that change isn't purely technical; it's human. It's intricate, fulfilling, and, simply put, essential.

To sum up, the businesses of the future are not necessarily the newest; rather, they are often the oldest, the ones that choose to evolve with a strong sense of intention. In that evolution, legacy is not a hindrance but a powerful resource.

Demystifying Geospatial Data: Tracking, Geofencing, and Driving Patterns

Author: Muhammad Rizwan, a Senior Software Engineer specialising in microservices architecture, cloud-based applications, and geospatial data integration.

In a world where apps and platforms are becoming increasingly location-aware, geospatial data has become an essential tool across industries, ranging from delivery and logistics to personal security, urban planning, and autonomous vehicles. Whether tracking a package, building a virtual fence, or analyzing how a person drives, geospatial data enables us to know the "where" of all things.

This article explores the core concepts of geospatial data, including:

  • Real-time tracking
  • Distance measurement algorithms
  • Types of geofences
  • How to detect if a location is within a geofence
  • Driving behavior and pattern analysis

Understanding Geospatial Coordinates

To make sense of geospatial data, we first need to understand how locations are represented on Earth. Every point on the planet is identified using a coordinate system that provides a precise way to describe positions in space.

At the core of this system are two fundamental values:

  • Latitude (North-South position)
  • Longitude (East-West position)

Together, they form a GeoCoordinate:

public class GeoCoordinate
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

Understanding geospatial coordinates is essential for working with location-based data, but knowing a location alone is not always enough. In many applications, such as navigation, logistics, and geofencing, it is equally important to measure the distance between two points.

How to Measure Distance Between Two Locations

One of the most commonly used methods for calculating the straight-line (“as-the-crow-flies”) distance between two geographical points is the Haversine formula. The following mathematical approach accounts for the curvature of the Earth, making it ideal for accurate distance measurements.

Haversine Formula

Let:

  • φ₁, λ₁ = latitude and longitude of point 1 (in radians)
  • φ₂, λ₂ = latitude and longitude of point 2 (in radians)
  • Δφ = φ₂ - φ₁
  • Δλ = λ₂ - λ₁
  • R = Earth's mean radius (6,371,000 meters)

Then:

a = sin²(Δφ / 2) + cos(φ₁) × cos(φ₂) × sin²(Δλ / 2)
c = 2 × atan2(√a, √(1 - a))
Distance = R × c

Implementation in C#

public static class GeoUtils
{
    private const double EarthRadiusMeters = 6371000;

    public static double DegreesToRadians(double degrees)
    {
        return degrees * (Math.PI / 180);
    }

    public static double HaversineDistance(double lat1, double lon1, double lat2, double lon2)
    {
        double dLat = DegreesToRadians(lat2 - lat1);
        double dLon = DegreesToRadians(lon2 - lon1);
        double radLat1 = DegreesToRadians(lat1);
        double radLat2 = DegreesToRadians(lat2);

        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(radLat1) * Math.Cos(radLat2) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);

        double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));

        return EarthRadiusMeters * c;
    }
}

Example:

double nyLat = 40.7128, nyLng = -74.0060;
double laLat = 34.0522, laLng = -118.2437;

double distance = GeoUtils.HaversineDistance(nyLat, nyLng, laLat, laLng);
Console.WriteLine($"Distance: {distance / 1000} km");

Accurately measuring the distance between two points is a fundamental aspect of geospatial analysis, enabling uses ranging from navigation and logistics to geofencing and autonomous systems. The Haversine formula provides a valid method of calculating straight-line distances by accounting for the curvature of the Earth, which is why it is a standard approach in many industries. However, for more precise real-world calculations, such as road navigation or terrain-aware route planning, other models like the Vincenty formula or graph-based routing algorithms may be more suitable.

By mastering and applying these techniques of distance calculation, we can increase the precision of location-based services and decision-making in spatial applications. Furthermore, with the ability to accurately measure distances between two points, we can extend geospatial analysis to more advanced applications, such as defining and managing geofences.

Geofencing

Geofencing is a geospatial technology with great promise that draws virtual boundaries around specific geographic areas. Using GPS, Wi-Fi, or cellular positioning, geofences initiate automatic responses when a device or object crosses a defined location. Moreover, geofencing is crucial in instances of location-based marketing, security monitoring, and fleet tracking.

Different geofence types exist for different applications. The most commonly used are circular geofences, defined by a center point and a radius, and polygonal geofences, which support more complex shapes defined by a set of boundary points. We will look at both in detail next.

Types of Geofences

1. Circular Geofence

Defined by:

  • A center point (lat/lng)
  • A radius in meters

public class CircularGeofence
{
    public GeoCoordinate Center { get; set; }
    public double RadiusMeters { get; set; }

    public bool IsInside(GeoCoordinate point)
    {
        var distance = GeoUtils.HaversineDistance(
            Center.Latitude, Center.Longitude,
            point.Latitude, point.Longitude
        );

        return distance <= RadiusMeters;
    }
}
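As a quick usage example, assuming the GeoCoordinate and GeoUtils types defined earlier, a 500-meter fence around a point in Manhattan could be checked like this (the coordinates and radius are purely illustrative):

var fence = new CircularGeofence
{
    Center = new GeoCoordinate { Latitude = 40.7128, Longitude = -74.0060 },
    RadiusMeters = 500
};

var device = new GeoCoordinate { Latitude = 40.7130, Longitude = -74.0055 };
Console.WriteLine(fence.IsInside(device)); // True: roughly 50 meters from the center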

2. Polygonal Geofence

A list of vertices (lat/lng pairs) forming a closed shape. The Point-in-Polygon Algorithm (Ray Casting) is used for detection.

public static bool IsPointInPolygon(List<GeoCoordinate> polygon, GeoCoordinate point)
{
    int n = polygon.Count;
    bool inside = false;

    for (int i = 0, j = n - 1; i < n; j = i++)
    {
        if (((polygon[i].Latitude > point.Latitude) != (polygon[j].Latitude > point.Latitude)) &&
            (point.Longitude < (polygon[j].Longitude - polygon[i].Longitude) *
             (point.Latitude - polygon[i].Latitude) /
             (polygon[j].Latitude - polygon[i].Latitude) + polygon[i].Longitude))
        {
            inside = !inside;
        }
    }

    return inside;
}
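As a quick, hypothetical sanity check, the following example defines a rectangular geofence and tests a point that lies inside it (coordinates are illustrative):

var polygon = new List<GeoCoordinate>
{
    new GeoCoordinate { Latitude = 40.0, Longitude = -75.0 },
    new GeoCoordinate { Latitude = 40.0, Longitude = -74.0 },
    new GeoCoordinate { Latitude = 41.0, Longitude = -74.0 },
    new GeoCoordinate { Latitude = 41.0, Longitude = -75.0 }
};

var point = new GeoCoordinate { Latitude = 40.5, Longitude = -74.5 };
Console.WriteLine(IsPointInPolygon(polygon, point)); // True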

Geofencing not only helps in establishing virtual boundaries, but also serves as a foundation for richer observations about mobility patterns. By tracking when and where objects enter and exit a geofence, organizations can gather useful data about mobility trends, security breaches, and operational efficiency.
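A minimal sketch of such entry/exit detection, assuming the CircularGeofence and GeoCoordinate classes above, might compare consecutive position fixes. The enum and method names here are illustrative assumptions:

public enum GeofenceTransition { None, Entered, Exited }

public static class GeofenceMonitor
{
    // Compares two consecutive position fixes against a fence and reports
    // whether the device entered, exited, or stayed on the same side.
    public static GeofenceTransition DetectTransition(
        CircularGeofence fence, GeoCoordinate previous, GeoCoordinate current)
    {
        bool wasInside = fence.IsInside(previous);
        bool isInside = fence.IsInside(current);

        if (!wasInside && isInside) return GeofenceTransition.Entered;
        if (wasInside && !isInside) return GeofenceTransition.Exited;
        return GeofenceTransition.None;
    }
}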

However, geofencing is just one aspect of geospatial analytics. It’s easy to define boundaries, but it’s another thing to quantify movement within them. Now, let’s explore how we can derive meaningful behavioral metrics from location tracking.

Analyzing Driving Behavior

Once you’ve tracked locations, you can derive behavioral metrics such as:

  • Speed: distance covered over time
  • Idle Time: the location doesn't change for a given duration
  • Harsh Braking: a sudden drop in speed
  • Route Efficiency: the actual route compared with the optimized route

public class GeoPoint
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    public DateTime Timestamp { get; set; }
}

public bool IsStopped(List<GeoPoint> positions, int timeThresholdSeconds = 60)
{
    if (positions.Count < 2) return false;

    var first = positions.First();
    var last = positions.Last();

    double distance = GeoUtils.HaversineDistance(
        first.Latitude, first.Longitude,
        last.Latitude, last.Longitude
    );

    double timeElapsed = (last.Timestamp - first.Timestamp).TotalSeconds;

    return distance < 5 && timeElapsed > timeThresholdSeconds;
}
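The IsStopped check above covers idle time; the other metrics in the list can be derived in a similar way. Below is a hedged sketch of speed and harsh-braking detection built on the same GeoPoint and GeoUtils types. The 3 m/s² deceleration threshold is an assumption chosen for illustration, not an industry standard.

public static class DrivingMetrics
{
    // Average speed between two fixes, in meters per second.
    public static double AverageSpeed(GeoPoint from, GeoPoint to)
    {
        double meters = GeoUtils.HaversineDistance(
            from.Latitude, from.Longitude, to.Latitude, to.Longitude);
        double seconds = (to.Timestamp - from.Timestamp).TotalSeconds;
        return seconds > 0 ? meters / seconds : 0;
    }

    // Flags a sudden drop in speed between two consecutive segments.
    public static bool IsHarshBraking(GeoPoint a, GeoPoint b, GeoPoint c,
        double decelerationThreshold = 3.0)
    {
        double firstSegmentSpeed = AverageSpeed(a, b);
        double secondSegmentSpeed = AverageSpeed(b, c);
        double seconds = (c.Timestamp - b.Timestamp).TotalSeconds;
        if (seconds <= 0) return false;

        double deceleration = (firstSegmentSpeed - secondSegmentSpeed) / seconds;
        return deceleration > decelerationThreshold;
    }
}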

Analyzing driving behavior with geospatial data offers valuable insights into speed, idle time, harsh braking, and route efficiency. These metrics help improve safety, optimize operations, and enable data-driven decisions in fleet management or personal driving assessments. By integrating location tracking with behavior analysis, you can enhance productivity and reduce costs.

Real-World Applications

There is no denying that geospatial data plays a critical role across various industries, providing solutions that enhance efficiency, safety, and insights. Below are some key real-world applications where geospatial technology is applied to solve everyday challenges.

  • Delivery Tracking: live route monitoring with alerts
  • Fleet Monitoring: detect unsafe driving or inefficiencies
  • Campus Security: alert when someone leaves or enters a zone
  • Wildlife Tracking: map and analyze movement patterns

Conclusion

To conclude, in a world where location is key, geospatial information offers powerful leverage for industry innovation and operational improvement. From real-time positioning and geofencing to driving behavior analysis, the ability to measure, manage, and react to location-based insight opens the door to better decision-making, efficiency, and safety. Whether it's enhancing fleet management, safeguarding campuses, or monitoring wildlife, the applications of geospatial data are vast and impactful. As we continue to explore its potential, the integration of real-time data with advanced analytics will reshape how we interact with the world around us, making it smarter, safer, and more efficient.