Breathing New Life into Legacy Businesses with AI

Author: Jenn Cunningham, Go-to-Market Lead, Strategic Alliances at PolyAI, where she manages key relationships with AWS and global consulting partners while collaborating closely with the PolyAI cofounders on product expansion and new market entry. Her journey from data science beginnings to implementation consulting gives her a front-row seat to how legacy businesses are leveraging AI to evolve and thrive.

***

When I finished my university degree, data science and data analytics were the hot topic, as businesses raced to become more data-driven organizations. I was excited to unlock new customer insights and inform business strategy, until my first project. After eight weeks of cleansing data and crunching numbers for 12 hours a day, it was glaringly obvious that I was entirely too extroverted for a pure data science role. This led me to start a personal research project, exploring how businesses evaluate, implement, and adopt different types of process automation technology as the technology itself continued to evolve. That research led me to the broader capabilities of data and AI, primarily what they could do for the operations of businesses labeled legacy, not only for efficiency, but also for improving customer service. These companies tend to be branded or perceived as slower to adapt, but they’re full of indisputable value waiting for the right nudge.

AI is providing that nudge. Today, AI does more than automate tedious work; it is changing how businesses create and perceive value. A century-old bank, a global manufacturer, a regional insurer: these are just a few examples of businesses adopting AI at their core, improving their internal systems while retaining their rich history.

This didn’t happen overnight; it took many steps, each more significant than the last. To understand how AI reached its current state, we need to wind the clocks back to a time when data was a luxury rather than an inevitability.

The First Wave: Data as a Project

In the early days of data science inside companies, data was treated like a whiteboard-and-marker experiment. Businesses weren’t sure what to do with it, so they gave it project-like treatment: a start, an end, and a PowerPoint deck of interim findings somewhere in the middle. “Let’s get a data scientist to look at this” became a familiar refrain, applied ad hoc to one business domain after another.

At the time of my research, organizations were just beginning to shift from gut feeling to data-informed strategy, but everything still felt labored. Clients unfamiliar with the process sometimes took things surprisingly literally. In one case, a client printed out .txt files of customer interactions like Word documents, posted them around a conference room with scissors and tape, and calculated key metrics visually, calculators and highlighters in hand. It was data science in its raw, unrefined glory.

The purpose wasn’t to create sustainable systems. It was to answer prompts such as “What’s our churn rate?” or “Was this campaign successful?” These questions, while important in their own right, were answered in isolation. Each project felt like a fleeting victory with little future potential. There was no reusable framework, no collaboration across teams, and definitely no foresight into what data could evolve into.

However, this preliminary wave mattered: it allowed companies to recognize the limits of instinct-driven decision-making and the usefulness of evidence. Because the work was episodic, though, it rarely resulted in foundational change, and even when insights did materialize, they could not drive change at scale.

The Second Wave: Building a Foundation for Ongoing Innovation

Gradually, a new understanding surfaced, one that moved data from being a tactical resource to a strategic asset. In this second wave, companies sought answers to more advanced questions: How do we use data to enable proactive decision-making rather than only reactive responses? How can we weave insights into the operational fabric of the company?

Rather than bringing on data scientists on a contract-by-contract basis, companies working with data transformed their approach: they built internal ecosystems of expertise composed of multidisciplinary teams and fostered a spirit of innovation. The focus shifted from immediate problem-solving to laying the groundwork for comprehensive future infrastructure.

Moreover, data started to shift from back-office functions to the forefront. Marketing, sales, product, and customer service teams gained access to real-time dashboards, AI tools, predictive analytics, and a host of other utilities. This democratization of data brought the power of AI-driven insights to the decision makers who worked directly with customers and crafted user experiences.

What also became clear during this phase was that not all parts of the organization required the same level of AI maturity at the same time. Some teams were ready for complete automation; others just required clean reporting, and that was perfectly fine. The goal was not uniform adoption; it was movement. The most forward-thinking companies understood that change didn’t have to happen everywhere all at once; it just needed a starting point and careful cultivation.

This was the turning point when data evolved from a department into a capability, one that could drive continuous improvement instead of relying on project-based wins. That is when the flywheel of innovation began to spin.

The Current Wave: Reimagining Processes with AI

Today, we are experiencing a third and possibly the most impactful wave of change. AI is no longer limited to enhancing analytics and operational efficiency; it is rethinking the very framework of how businesses are structured and run. What was previously regarded as an expenditure is now considered a decisive competitive advantage.

Consider what PolyAI and Simplyhealth have done. Simplyhealth, a UK health insurer, partnered with PolyAI to implement voice AI within its customer service channels. This integration went beyond a basic chatbot: the system was ‘empathetic AI’ in that it could understand urgency, recognize vulnerable callers, and make judgment calls on whether a patient should be passed to a human agent.

Everyone saw the difference. There was less waiting around, better call resolution, and, most crucially, those who required care from a member of staff received it. AI did not take the person out of the process; it elevated the person in the process, letting empathy and humanity work alongside efficiency.

Such a focus on building technology around humans is rapidly becoming a signature of AI-driven change. You see it in retail, where AI customizes every touchpoint in the customer experience. It’s happening in manufacturing, where predictive maintenance avoids the costs of breakdowns. And financial services are shifting as AI offers personalized financial guidance, fraud detection, and assistance to those underserved by traditional support.

In all these examples, AI supports rather than replaces people. Customer service representatives are equipped with richer context that sharpens their responses, teams are freed from repetitive work, and strategists get help concentrating resources where they matter. Today’s best AI use cases focus on augmenting the human experience instead of reducing the workforce.

Conclusion

Too often, the phrase “legacy business” is used to describe something old-fashioned or boring. In fact, these are businesses with long-standing customer relationships and rich histories, which is exactly what enables them to evolve in meaningful ways.

Modern AI adoption doesn’t simply replace manual labor; the journey from spreadsheets and instinct-based decisions to fully integrated AI systems is more complex than that. Businesses adopt modern practices progressively, with vision and patience for cultural change. Far from falling behind, legacy businesses are keeping pace, and many are leading the race.

AI today is not just changing how work gets done; it is becoming a driver of culture. It shapes how we collaborate, deliver services, value customers, and much more. Whether implementing new business strategies, redefining customer support, or optimizing logistics, AI is proving to be a propellant for human-centered transformation.

And for the visionaries and team members who witnessed this evolution firsthand, one lesson stands out: change isn’t purely technical; it’s human. It’s intricate, fulfilling, and, simply put, essential.

To sum up, the businesses of the future are not necessarily the newest; often they are the oldest ones that choose to evolve with intention. In that evolution, legacy is not a hindrance but a powerful resource.

Demystifying Geospatial Data: Tracking, Geofencing, and Driving Patterns

Author: Muhammad Rizwan, a Senior Software Engineer specialising in microservices architecture, cloud-based applications, and geospatial data integration.

In a world where apps and platforms are becoming increasingly location-aware, geospatial data has become an essential tool across industries, ranging from delivery and logistics to personal security, urban planning, and autonomous vehicles. Whether tracking a package, building a virtual fence, or analyzing how a person drives, geospatial data enables us to know the “where” of all things.

This article explores the core concepts of geospatial data, including:

  • Real-time tracking
  • Distance measurement algorithms
  • Types of geofences
  • How to detect if a location is within a geofence
  • Driving behavior and pattern analysis

Understanding Geospatial Coordinates

To make sense of geospatial data, we first need to understand how locations are represented on Earth. Every point on the planet is identified using a coordinate system that provides a precise way to describe positions in space.

At the core of this system are two fundamental values:

  • Latitude (North-South position)
  • Longitude (East-West position)

Together, they form a GeoCoordinate:

public class GeoCoordinate
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

Understanding geospatial coordinates is essential for working with location-based data, but knowing a location alone is not always enough. In many applications, such as navigation, logistics, and geofencing, it is equally important to measure the distance between two points.

How to Measure Distance Between Two Locations

One of the most commonly used methods for calculating the straight-line (“as-the-crow-flies”) distance between two geographical points is the Haversine formula. This approach accounts for the curvature of the Earth, making it well suited to accurate distance measurements.

Haversine Formula

Let:

  • φ₁, λ₁ = latitude and longitude of point 1 (in radians)
  • φ₂, λ₂ = latitude and longitude of point 2 (in radians)
  • Δφ = φ₂ − φ₁
  • Δλ = λ₂ − λ₁
  • R = Earth’s radius (mean radius = 6,371,000 meters)

Then:

a = sin²(Δφ / 2) + cos(φ₁) × cos(φ₂) × sin²(Δλ / 2)

c = 2 × atan2(√a, √(1 − a))

Distance = R × c

Implementation in C#

public static class GeoUtils
{
    private const double EarthRadiusMeters = 6371000;

    public static double DegreesToRadians(double degrees)
    {
        return degrees * (Math.PI / 180);
    }

    public static double HaversineDistance(double lat1, double lon1, double lat2, double lon2)
    {
        double dLat = DegreesToRadians(lat2 - lat1);
        double dLon = DegreesToRadians(lon2 - lon1);
        double radLat1 = DegreesToRadians(lat1);
        double radLat2 = DegreesToRadians(lat2);

        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(radLat1) * Math.Cos(radLat2) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);

        double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));

        return EarthRadiusMeters * c;
    }
}

Example:

double nyLat = 40.7128, nyLng = -74.0060;
double laLat = 34.0522, laLng = -118.2437;

double distance = GeoUtils.HaversineDistance(nyLat, nyLng, laLat, laLng);
Console.WriteLine($"Distance: {distance / 1000} km");

Accurately measuring the distance between two points is a fundamental aspect of geospatial analysis, enabling uses ranging from navigation and logistics to geofencing and autonomous systems. The Haversine formula provides a reliable way to calculate straight-line distances by accounting for the curvature of the Earth, which is why it is a standard method across many industries. However, for more demanding real-world cases such as road navigation or terrain-aware route planning, other models like Vincenty’s formulae or graph-based routing algorithms may be more suitable.

By mastering and applying these techniques of distance calculation, we can increase the precision of location-based services and decision-making in spatial applications. Furthermore, with the ability to accurately measure distances between two points, we can extend geospatial analysis to more advanced applications, such as defining and managing geofences.
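As a quick numeric sanity check on the formula above (sketched here in Python rather than C#, simply because it is easy to run standalone; the constant mirrors the C# version), the New York to Los Angeles example should come out to roughly 3,936 km:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, matching the C# constant


def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lam = math.radians(lon2 - lon1)
    # a = sin²(Δφ/2) + cos(φ1)·cos(φ2)·sin²(Δλ/2)
    a = math.sin(d_phi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(d_lam / 2) ** 2
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    return EARTH_RADIUS_M * c


distance = haversine(40.7128, -74.0060, 34.0522, -118.2437)  # NYC to LA
print(f"Distance: {distance / 1000:.0f} km")  # ≈ 3936 km
```

Running this agrees with the commonly quoted great-circle distance between the two cities, which is a useful regression check for any port of the formula.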

Geofencing

Geofencing is a geospatial technique that draws virtual boundaries around specific geographic areas. Using GPS, Wi-Fi, or cellular positioning, geofences trigger automatic responses when a device or object crosses a defined boundary. Geofencing is widely used in location-based marketing, security monitoring, and fleet tracking.

Different geofence types exist for different applications. The most commonly used are circular geofences, defined by a center point and a radius, and polygonal geofences, which support more complex shapes via a list of boundary points. We will look at both in detail next.

Types of Geofences

1. Circular Geofence

Defined by:

  • A center point (lat/lng)
  • A radius in meters

public class CircularGeofence
{
    public GeoCoordinate Center { get; set; }
    public double RadiusMeters { get; set; }

    public bool IsInside(GeoCoordinate point)
    {
        var distance = GeoUtils.HaversineDistance(
            Center.Latitude, Center.Longitude,
            point.Latitude, point.Longitude
        );
        return distance <= RadiusMeters;
    }
}

2. Polygonal Geofence

A list of vertices (lat/lng pairs) forming a closed shape. The Point-in-Polygon Algorithm (Ray Casting) is used for detection.

public static bool IsPointInPolygon(List<GeoCoordinate> polygon, GeoCoordinate point)
{
    int n = polygon.Count;
    bool inside = false;

    for (int i = 0, j = n - 1; i < n; j = i++)
    {
        if (((polygon[i].Latitude > point.Latitude) != (polygon[j].Latitude > point.Latitude)) &&
            (point.Longitude < (polygon[j].Longitude - polygon[i].Longitude) *
             (point.Latitude - polygon[i].Latitude) /
             (polygon[j].Latitude - polygon[i].Latitude) + polygon[i].Longitude))
        {
            inside = !inside;
        }
    }

    return inside;
}
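To make the ray-casting behavior concrete, here is a minimal Python port of the same algorithm, exercised against a hypothetical square geofence (the coordinates are invented purely for illustration):

```python
def is_point_in_polygon(polygon, point):
    """Ray casting: toggle 'inside' each time a ray from the point crosses an edge.

    polygon: list of (lat, lon) vertices; point: (lat, lon).
    """
    lat, lon = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        lat_i, lon_i = polygon[i]
        lat_j, lon_j = polygon[j]
        # Does edge (i, j) straddle the point's latitude?
        if (lat_i > lat) != (lat_j > lat):
            # Longitude where the edge crosses that latitude
            x = (lon_j - lon_i) * (lat - lat_i) / (lat_j - lat_i) + lon_i
            if lon < x:
                inside = not inside
        j = i
    return inside


# A small square geofence around an arbitrary test location
square = [(51.50, -0.13), (51.52, -0.13), (51.52, -0.11), (51.50, -0.11)]
print(is_point_in_polygon(square, (51.51, -0.12)))  # True: inside the square
print(is_point_in_polygon(square, (51.53, -0.12)))  # False: north of the square
```

Note that this treats latitude/longitude as a flat plane, which is a reasonable approximation for small geofences but degrades near the poles or across the antimeridian.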

Geofencing not only establishes virtual boundaries; it also serves as a foundation for deeper insight into mobility patterns. By tracking when and where objects enter and exit a geofence, organizations can gather useful data about mobility trends, security breaches, and operational efficiency.
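Detecting those entry and exit events from a position stream is straightforward once an inside/outside test exists. A small, illustrative Python sketch (the `contains` callback and the sample track are invented for demonstration; any of the geofence checks above could serve as `contains`):

```python
def geofence_events(positions, contains):
    """Yield ('enter'/'exit', timestamp) whenever the inside/outside state flips.

    positions: iterable of (timestamp, point); contains: point -> bool.
    """
    was_inside = None
    for ts, point in positions:
        inside = contains(point)
        if was_inside is not None and inside != was_inside:
            yield ("enter" if inside else "exit", ts)
        was_inside = inside


# Hypothetical 1-D track: "inside" means latitude between 10 and 20
track = [(0, 5), (1, 12), (2, 15), (3, 25)]
events = list(geofence_events(track, lambda lat: 10 <= lat <= 20))
print(events)  # [('enter', 1), ('exit', 3)]
```

In production you would also want to debounce these transitions (e.g., require two consecutive readings inside before emitting "enter") so that GPS jitter near the boundary does not produce spurious events.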

However, geofencing is just one aspect of geospatial analytics. It’s easy to define boundaries, but it’s another thing to quantify movement within them. Now, let’s explore how we can derive meaningful behavioral metrics from location tracking.

Analyzing Driving Behavior

Once you’ve tracked locations, you can derive behavioral metrics such as:

Metric           | Description
-----------------|----------------------------------------
Speed            | Distance over time
Idle Time        | Location doesn’t change for a duration
Harsh Braking    | Sudden drop in speed
Route Efficiency | Compare actual vs. optimized route

public class GeoPoint
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    public DateTime Timestamp { get; set; }
}

public bool IsStopped(List<GeoPoint> positions, int timeThresholdSeconds = 60)
{
    if (positions.Count < 2) return false;

    var first = positions.First();
    var last = positions.Last();

    double distance = GeoUtils.HaversineDistance(
        first.Latitude, first.Longitude,
        last.Latitude, last.Longitude
    );

    double timeElapsed = (last.Timestamp - first.Timestamp).TotalSeconds;

    return distance < 5 && timeElapsed > timeThresholdSeconds;
}
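The other metrics in the table can be derived the same way. Here is an illustrative Python sketch (the function names and the 6 m/s threshold are my own, not from the article) that computes per-segment speed and flags harsh braking as a sudden drop between consecutive segments:

```python
def segment_speeds(points, distance_fn):
    """Speed in m/s for each consecutive pair of points.

    points: list of (timestamp_seconds, lat, lon), sorted by time.
    distance_fn: (lat1, lon1, lat2, lon2) -> meters, e.g. a Haversine implementation.
    """
    speeds = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(points, points[1:]):
        dt = t2 - t1
        if dt > 0:
            speeds.append(distance_fn(la1, lo1, la2, lo2) / dt)
    return speeds


def harsh_braking_events(speeds, drop_mps=6.0):
    """Indices of segments where speed fell by more than drop_mps vs. the previous one."""
    return [i + 1 for i, (a, b) in enumerate(zip(speeds, speeds[1:])) if a - b > drop_mps]


# Demo with a stub distance function (pretend every segment covers 100 m)
demo = [(0, 0.0, 0.0), (10, 0.0, 0.0), (20, 0.0, 0.0)]
print(segment_speeds(demo, lambda *_: 100.0))  # [10.0, 10.0]
print(harsh_braking_events([15.0, 14.0, 5.0, 4.0]))  # [2]
```

Real fleet systems typically smooth the raw GPS speeds first, since a single noisy fix can otherwise masquerade as a braking event.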

Analyzing driving behavior with geospatial data offers valuable insights into speed, idle time, harsh braking, and route efficiency. These metrics help improve safety, optimize operations, and enable data-driven decisions in fleet management or personal driving assessments. By integrating location tracking with behavior analysis, you can enhance productivity and reduce costs.

Real-World Applications

There is no denying that geospatial data plays a critical role across various industries, providing solutions that enhance efficiency, safety, and insights. Below are some key real-world applications where geospatial technology is applied to solve everyday challenges.

Use Case          | Description
------------------|------------------------------------------
Delivery Tracking | Live route monitoring with alerts
Fleet Monitoring  | Detect unsafe driving or inefficiencies
Campus Security   | Alert if someone leaves or enters a zone
Wildlife Tracking | Map and analyze movement patterns

Conclusion

In a world where location is key, geospatial data offers a potent tool for industry innovation and operational improvement. From real-time positioning and geofencing to driving behavior analysis, the ability to measure, manage, and react to location-based insight opens the door to better decision-making, efficiency, and safety. Whether it’s enhancing fleet management, safeguarding campuses, or monitoring wildlife, the applications of geospatial data are vast and impactful. As we continue to explore its potential, the integration of real-time data with advanced analytics will reshape how we interact with the world around us, making it smarter, safer, and more efficient.

Testing Camera and Gallery in Android: Emulating Image Loading in UI Tests

Author: Dmitrii Nikitin, an Android Team Leader at Quadcode with over 7 years of experience in developing scalable mobile solutions and leading Android development teams.

A lot of Android apps ask the user to upload images: social media apps, document scanners, cloud storage providers, you name it. These scenarios are often left without automated tests because developers would rather not deal with opening the camera or the gallery in a test.

But such difficulties can be overcome. In this article, I will discuss simulating camera and gallery behavior in emulators, injecting specific images for testing purposes, mocking intents, and how to know when these methods are not enough for thorough testing.

Emulating Camera Images in Android Emulator

The Android emulator can display arbitrary images as camera sources, which is extremely convenient if you’re testing flows like “take a picture” or “scan a document” and you’d like the camera to show the same image every time.

Setting Up Custom Camera Images

The emulator uses a scene configuration file located at:

$ANDROID_HOME/emulator/resources/Toren1BD.posters

You can add a poster block to this file with these attributes:

poster custom
size 1.45 1.45
position 0.05 -0.15 -1.4
rotation -13 0 0
default custom-poster.jpg

This setting determines:

  • default: The path to the image used as the camera feed
  • size, position, rotation: Image size, position, and rotation angle parameters in the scene

Automating Image Setup

You can automate this process with a shell command:

sed -i '1s,^,poster custom\n size 1.45 1.45\n position 0.05 -0.15 -1.4\n rotation -13 0 0\n default custom-poster.jpg\n,' $ANDROID_HOME/emulator/resources/Toren1BD.posters

Here is a Kotlin script that copies the required file into the correct position:

class SetupCameraImageScenario(private val imageFileName: String): BaseScenario<ScenarioData>() {
    override val steps: TestContext<ScenarioData>.() -> Unit = {
        val androidHome = System.getenv("ANDROID_HOME") ?: error("ANDROID_HOME is required")
        val posterPath = "$androidHome/emulator/resources/custom-poster.jpg"
        val localImagePath = "src/androidTest/resources/$imageFileName"
        val cmd = "cp $localImagePath $posterPath"
        Runtime.getRuntime().exec(cmd).waitFor()
    }
}

Injecting Images into Gallery

Gallery testing (Intent.ACTION_PICK) is less of a pain than camera testing, but there is one crucial gotcha: copying the file to internal storage alone is not enough. If you simply copy an image file, it will not appear in the system picker.

An image must be written to the correct folder to be pickable, and must also be registered in MediaStore.

Proper Gallery Image Setup

The process involves:

  1. Declaring the name, type, and path of the image (e.g., Pictures/Test)
  2. Obtaining a URI from MediaStore and storing the image content into it

You can implement it as follows:

class SetupGalleryImageScenario(private val imageFileName: String) : BaseScenario<Unit>() {
    override val steps: TestContext<Unit>.() -> Unit = {
        step("Adding image to MediaStore") {
            val context = InstrumentationRegistry.getInstrumentation().targetContext
            val resolver = context.contentResolver
            
            val values = ContentValues().apply {
                put(MediaStore.Images.Media.DISPLAY_NAME, imageFileName)
                put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg")
                put(MediaStore.Images.Media.RELATIVE_PATH, "Pictures/Test")
            }
            
            val uri = resolver.insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values)
            checkNotNull(uri) { "Failed to insert image into MediaStore" }
            
            resolver.openOutputStream(uri)?.use { output ->
                val assetStream = context.assets.open(imageFileName)
                assetStream.copyTo(output)
            }
        }
    }
}

Now, when the test opens the gallery, the required image will be visible among the options.

Selecting Images from Gallery

After you’ve placed the image in the MediaStore, you need to trigger Intent.ACTION_PICK and select the appropriate file in the UI. That’s where UiAutomator is useful, as the picker UI varies across Android versions:

  • Photo Picker (Android 13+)
  • System file picker or gallery on older Android versions

In order to support both, create a wrapper:

class ChooseImageScenario<ScenarioData>(
    onOpenFilePicker: () -> Unit,
) : BaseScenario<ScenarioData>() {
    override val steps: TestContext<ScenarioData>.() -> Unit = {
        if (PickVisualMedia.isPhotoPickerAvailable(appContext)) {
            scenario(ChooseImageInPhotoPickerScenario(onOpenFilePicker))
        } else {
            scenario(ChooseImageInFilesScenario(onOpenFilePicker))
        }
    }
}

Both approaches start the same way by calling onOpenFilePicker() (typically a button click in the UI), then:

  • ChooseImageInPhotoPickerScenario: Locates and taps the image within Photo Picker
  • ChooseImageInFilesScenario: Opens the system file manager (for example, locating the file by name via UiSelector().text("test_image.jpg") and opening it)

This approach covers both kinds of scenarios, making picking an image general and robust.

Intent Mocking: When Real Camera or Gallery Isn’t Necessary

Most tests will not require opening the actual camera or gallery apps. To test how the app responds after an image is received, you can mock the external app’s response using Espresso Intents or Kaspresso.

For example, when you’re testing that after a user “took a picture” the UI displays the correct image or enables a button, you don’t need to open the camera. You can simulate the result instead:

val resultIntent = Intent().apply {
    putExtra("some_result_key", "mocked_value")
}
Intents.intending(IntentMatchers.hasAction(MediaStore.ACTION_IMAGE_CAPTURE))
    .respondWith(Instrumentation.ActivityResult(Activity.RESULT_OK, resultIntent))

When the app calls startActivityForResult() to launch the camera, the test receives an immediate, precooked result, as if the image had been captured and returned. The camera never launches, so the test is fast and predictable.

This strategy proves useful when:

  • You care more about how the result is processed than about the capture or selection flow itself
  • You need test execution to be faster
  • You need to avoid dependencies on the different camera/gallery versions across devices

When Mocking Isn’t Sufficient

Sometimes you need to verify not just that the app handles results correctly, but that it behaves correctly in real usage, such as when the user switches away and the system evicts the app from memory. One example is DNKA (Death Not Killed by Android).

Understanding DNKA

DNKA happens when Android quietly unloads your app because of memory pressure, loss of focus, or an explicit developer setting. onSaveInstanceState() may be invoked but onDestroy() may not. Users come back and expect the app to “restore” itself into the same state. Ensure that you:

  • Check that ViewModel and State are properly rebuilt
  • Check that the screen does not crash if no saved state exists
  • Check that SavedStateHandle contains what you expect
  • Check that user interaction (photo selection, form input, etc.) is preserved

Enabling DNKA

The simplest way to make Android terminate activities aggressively is through the developer system settings:

Developer Options → Always Finish Activities

You can also achieve this with ADB:

adb shell settings put global always_finish_activities 1
# 1 to enable, 0 to disable

With this setting enabled, any external activity launch (camera or gallery) will result in your Activity being destroyed. When you return to the app, it must recreate its state from scratch, which is precisely what we want to test.

Why Intent Mocks Don’t Help Here

When using mocked intents:

Intents.intending(IntentMatchers.hasAction(MediaStore.ACTION_IMAGE_CAPTURE))
    .respondWith(Instrumentation.ActivityResult(Activity.RESULT_OK, resultIntent))

The external application is never started, so Android won’t unload your Activity. The mock responds instantly, which makes it impossible to test DNKA scenarios this way.

When Real Intents Are Necessary

To verify DNKA behavior, Android actually needs to unload the Activity. This means firing real external Intents: taking a picture, selecting from the gallery, or launching third-party apps. Only this simulates the case where the user opens another application and yours “dies” in the background.

Conclusion

Automated tests sometimes need to “see” images, and this problem is not as thorny as it may seem. Testing photo loading from the camera or gallery doesn’t require real devices or manual testing: emulators let you pre-place the required images and present them as though the user had just taken or selected them.

While intent mocking is sufficient in some cases, others require the complete “real” flow in order to verify recovery from activity destruction. The trick is choosing the right method for your specific test scenario.

Understanding these methods enables you to test image-related functionality completely, so that your app behaves well in both happy-path and edge-case scenarios like system-induced process death. With the proper setup, you can create robust, stable tests for the full gamut of user activity across camera and gallery functionality.

Whether you are writing tests for profile picture uploads, document scanning, or something else that involves images, these practices provide the foundation for good automated testing without jeopardizing coverage or reliability.

Learning SwiftUI as a Designer: A Guide

Author: Oleksandr Shatov, Lead Product Designer at Meta

***

Recently, I have received many messages from fellow designers about transitioning from static design tools to creating a real iOS app using SwiftUI. In this article, I will describe my journey, sharing my favourite resources, practical tips, and the best tools for designers who want to master the framework and release their apps. 

Why SwiftUI is a Game-Changer for Designers 

SwiftUI is Apple’s framework for building user interfaces in iOS, iPadOS, macOS, watchOS, and tvOS. 

SwiftUI’s built-in modifiers for styling, animations, and gestures allow designers to create complex interfaces with minimal code. Specialists can also use native features like haptics, cameras, and sensors to make designs authentic. 

SwiftUI helps to ship real apps. The gap between design and development has shrunk, so designers can now turn their ideas into products accessible to millions of users. 

Getting Started: SwiftUI Basics

If you are new to SwiftUI, one of the best resources I have found is a YouTube course where every lesson begins from a blank page with detailed explanations. It covers everything from basics and modifiers to more advanced concepts.

Some of the topics to focus on: 

  • Basics: Creating and styling basic UI elements like Text, Image, Buttons, and a To-Do list
  • Tools: Mastering HStack, VStack, and ZStack for arranging the interface
  • Navigation: Moving between screens and managing app flow
  • Case Studies: Rebuilding Spotify, Bumble, and Netflix with SwiftUI

After learning the basics, you can move to building real apps. 

How to build real apps 

Another YouTube channel I recommend specialises in building apps like Tinder and Instagram from scratch. These videos explain the entire process – from setting up the project and organising your code to implementing other features (authentication, data storage, and animation). 

My main takeaway from the tutorials is that building a simple app comes first.

Remember to take every real-world project as a learning opportunity. Creating code, organising files, and implementing features helps you acquire the developer’s mindset and understand how designs work and scale.

Each app you build brings you closer to mastering SwiftUI. With time and practice, you will become more confident in tackling complex projects and implementing your ideas into fully functional apps. 

To be inspired

Learning a new skill can be overwhelming. Therefore, inspiration and motivation are necessary. I highly recommend reading articles by Paul Stamatiou, especially his piece on building apps as a designer. His experience proves that anything is possible with persistence and the right tools. 

AI as your coding partner 

AI tools were also beneficial for my learning process. My favourite is Cursor, an AI-powered code editor that integrates models such as Anthropic’s Claude Sonnet. It has full access to your Xcode project files and helps you instantly debug, refactor, and generate code. 

The reasons Cursor stands out: 

  • Other AI tools, such as the new GPT with Canvas, cannot access the file structure; Cursor understands the entire project. 
  • There is no native AI assistant inside Xcode yet, but Cursor’s integration is smooth. 

Integrating AI into your workflow lets you focus more on design and user experience – the creative side of the work – while AI handles the repetitive or complex coding tasks for you. 

Challenges and the future

When learning SwiftUI, you will encounter bugs, error messages, and frustration. Therefore, I would like to share some tips on how to overcome the issues. 

  • Step by step: The aforementioned YouTube videos are created for different skill levels – basic, intermediate, and advanced. Follow them in order. 
  • Establish a consistent learning schedule: Learning SwiftUI requires focus and regular practice to become proficient. I suggest frequent sessions rather than sporadic intensive study periods, as they are more effective.  

The line between design and development is blurring, especially with the emergence of AI; this process will continue. You can now create a functional app using the basics and tips I have shared in this article.

At first, you might feel overwhelmed by the complexity of real apps, especially regarding user authentication, data management, or animation. However, you can build confidence and competence by breaking down large tasks into smaller steps and applying what you have learned. 

Mastering SwiftUI might be complicated, but it is still possible. 

The Designer’s Toolkit for SwiftUI in 2024 

Here is the final list of the tools that have helped me achieve success as a designer learning SwiftUI: 

If you have your favourite resources for learning SwiftUI, please share them.

Winning in a Privacy-First Era: First-Party Data Strategies and the Role of the CDP

As privacy rules tighten, relying on third-party data is becoming more risky. Most customer-facing brands will soon depend almost entirely on their own first-party information. A Customer Data Platform, or CDP, is poised to be the backbone of that new strategy.

For several years, a growing wave of laws and tech changes has limited how companies can track and target people with outside data, that is, data collected by firms that never interact directly with the end user.

  • Regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have already raised global standards for how data is gathered and used. More regions are sure to roll out similar rules in the near future.
  • Smartphone makers are stepping in, too. Apple’s decision to make the IDFA (Identifier for Advertisers) opt-in made it much harder for brands to quietly track users across apps and sites and serve ads as they once did.
  • The biggest jolt to online ads came from Google, when the company announced it would drop third-party cookies. To give brands and publishers time to adjust, that change was pushed back to 2023. Now, Google is pitching Topics, the replacement for its earlier FLoC plan, as the main tool for a cookieless future.

Consumers are speaking up more loudly about their privacy these days. A March 2022 survey by the Consumer Technology Association showed that roughly two-thirds of U.S. adults worry a lot about how internet gadgets use their personal information.

Because of that pushback, relying on third-party data to guide sales and marketing has become risky business. That change hits the 88% of marketers who traditionally leaned on outside data to build a fuller picture of every shopper. Moving forward, brands will need to gather insights straight from the people they actually interact with. You can already guess what that means for anyone in sales or marketing.

First, we have to make every effort, whether through helpful newsletters, free trials, downloadable guides, or quality blog posts, to encourage customers to share their contact info. Getting that permission is just the starting line for a solid first-party data game plan.

Not starting from scratch

Large companies almost always have piles of first-party data just waiting to be put to good use. The trouble is that when this data sits in separate programs and departments, it fights against the seamless, online experience everyone keeps talking about. In fact, more than half of marketers (54%) say poor-quality and missing data is the single biggest roadblock to running campaigns that really feel data-driven. And as newer platforms like TikTok and connected TV become standard parts of the mix, that problem is unlikely to get better on its own.

Think of first-party customer data as a stack of loose tiles all over the floor of the business. If you want a tidy picture, you need a tool that picks those pieces up and lays them out in a clear pattern. That’s exactly the role a Customer Data Platform (CDP) was built to play.

Unlike the familiar Data Management Platforms, which mainly focus on outside data, a Customer Data Platform pulls in every piece of information you have, including Personally Identifiable Information (PII). It collects both clearly named and pseudonymous data from every channel and arranges everything in one clean format. While sorting, the system filters out anomalies and errors, raising the overall trustworthiness of what you see. Strong usage rules then help make sure the data is handled openly and fairly, giving customers more power over their own PII.
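To make that unification step concrete, here is a rough, hypothetical sketch of what a CDP does at its core (this is an illustration of the pattern only, not any vendor's actual API): merging records from separate channels into one profile keyed on a shared identifier, while filtering out records it cannot resolve.

```python
from collections import defaultdict

def unify_profiles(records):
    """Merge channel records into one profile per customer, keyed on email.

    Records with no email (no way to resolve identity) or with a
    malformed one are dropped, mirroring the cleansing a CDP performs
    during ingestion.
    """
    profiles = defaultdict(dict)
    for record in records:
        email = record.get("email")
        if not email or "@" not in email:
            continue  # unresolvable or malformed: drop it
        # Later records fill in gaps without overwriting known values
        for key, value in record.items():
            profiles[email].setdefault(key, value)
    return dict(profiles)

# Toy records from three separate channels
crm = {"email": "ana@example.com", "name": "Ana"}
web = {"email": "ana@example.com", "last_page": "/pricing"}
bad = {"name": "no-identity"}  # cannot be resolved to a person

unified = unify_profiles([crm, web, bad])
print(unified["ana@example.com"])
# → {'email': 'ana@example.com', 'name': 'Ana', 'last_page': '/pricing'}
```

A real platform adds identity resolution across devices, consent tracking, and real-time updates, but the shape of the job is the same: many fragments in, one trustworthy profile out.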

Now that customer data platforms are a bit older, many of them use Artificial Intelligence to fill in missing pieces of a customer’s story. Over time, they will even craft digital twins (a kind of educated-guess profile) for shoppers whose past behavior you can’t see, borrowing clues from people who look similar.

With this tech, your team can gather clear, privacy-friendly profiles without spending days manually stitching emails, website clicks, and in-store visits together. The platform can also suggest the best moment to gently ask a buyer for new information. Just as important, the CDP should work in real time, so every decision sits on the freshest data, not yesterday’s news. Taken together, a real-time system gives brands one united 360-degree picture of each shopper, making truly personal, seamless experiences possible across every channel.

The Best Survivors are the Best Adapters

A Real-Time Customer Data Platform lets you pull together first-party info from websites, apps, and other channels and show all that data in one clear place. By doing so, you can replace what third-party cookies once did and still learn what each person prefers at this very moment.

The clearer view lets you send the right message at the right time, whether today, tonight, or next week, rather than hoping you guessed correctly in advance.

When your outreach feels personal and accurate, customers notice, trust grows, and long-term relationships form. That kind of agility keeps your business moving forward even in a cookieless future.

Building serverless pipeline using AWS CDK and Lambda in Python

Creating a serverless pipeline with AWS CDK and AWS Lambda in Python lets you build event-driven applications that scale easily, without worrying about the underlying infrastructure. This article describes, step by step, how to create and set up a serverless pipeline with AWS CDK and a Python Lambda, using Visual Studio Code (VS Code) as the IDE.

By the end of this guide, you will have deployed a fully working AWS Lambda function with AWS CDK.

Understanding Serverless Architecture and Its Benefits

Serverless architecture is a cloud computing paradigm in which developers write code as functions that are executed in response to an event or request, without any server provisioning or management. Execution and resource allocation are handled automatically by the cloud provider – in this instance, AWS.

Key Characteristics of Serverless Architecture:

  1. Event-Driven: Functions are triggered by events such as S3 uploads, API calls, or other AWS service actions.
  2. Automatic Scaling: The platform automatically scales based on workload, handling high traffic without requiring manual intervention.
  3. Cost Efficiency: Users pay only for the compute time used by the functions, making it cost-effective, especially for workloads with varying traffic.
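The event-driven model in point 1 can be sketched in plain Python: handlers register for event types, and a dispatcher invokes whichever function matches the incoming event. This illustrates the pattern only; AWS's actual event routing is far more elaborate, and the names below are our own.

```python
# Minimal event-driven dispatch: the pattern behind "functions are
# triggered by events", sketched without any AWS dependency.
handlers = {}

def on(event_type):
    """Register a function to run when an event of this type arrives."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on("s3:ObjectCreated")
def process_upload(event):
    return f"processing {event['key']}"

def dispatch(event):
    fn = handlers.get(event["type"])
    return fn(event) if fn else None

print(dispatch({"type": "s3:ObjectCreated", "key": "raw/data.csv"}))
# → processing raw/data.csv
```

In the real pipeline, AWS plays the role of the dispatcher: an S3 upload or API call produces an event, and Lambda runs whichever function is subscribed to it.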

Benefits:

Serverless architecture offers numerous advantages for modern cloud applications. One of the most notable is improved operational efficiency, since there are no servers to configure and maintain. Developers are free to focus on building and writing code instead of managing infrastructure. 

Serverless architecture also enables better workload management: automatic scaling lets serverless platforms adjust to changing workloads without human intervention, making it effortless to absorb traffic spikes. This adaptability maintains high performance and efficiency while minimizing costs and resource waste.

In addition, serverless architecture has proven to be financially efficient, allowing users to pay solely for the computing resources they utilize, as opposed to pre-purchased server capacity. This flexibility is advantageous for workloads with unpredictable or fluctuating demand. Finally, the ease of use provided by serverless architecture leads to an accelerated market launch because developers can rapidly build, test, and deploy applications without the tedious task of configuring infrastructure, leading to faster development cycles.

Understanding ETL Pipelines and Their Benefits

ETL (Extract, Transform, Load) pipelines automate the movement and transformation of data between systems. In the context of serverless, AWS services like Lambda and S3 work together to build scalable, event-driven data pipelines.

Key Benefits of ETL Pipelines:

  1. Data Integration: Combines disparate data sources into a unified system.
  2. Scalability: Services like AWS Glue and S3 scale automatically to handle large datasets.
  3. Automation: Use AWS Step Functions or Python scripts to orchestrate tasks with minimal manual intervention.
  4. Cost Efficiency: Pay-as-you-go pricing models for services like Glue, Lambda, and S3 optimize costs.
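Before wiring these stages to AWS services, the extract-transform-load flow itself can be sketched in a few lines of plain Python. The in-memory source and store below are stand-ins for where S3 buckets would appear in the real pipeline:

```python
import json

def extract(source):
    """Extract: read raw records (stand-in for reading from S3)."""
    return json.loads(source)

def transform(records):
    """Transform: clean and reshape each record, dropping unusable ones."""
    return [
        {"name": r["name"].strip().title(), "amount": float(r["amount"])}
        for r in records
        if r.get("amount") is not None
    ]

def load(records, store):
    """Load: write results to the target (stand-in for an S3 put)."""
    store["output/data.json"] = json.dumps(records)

raw = '[{"name": " alice ", "amount": "10"}, {"name": "bob", "amount": null}]'
store = {}
load(transform(extract(raw)), store)
print(store["output/data.json"])
# → [{"name": "Alice", "amount": 10.0}]
```

The Lambda function we build later plays the transform-and-load role, with S3 on both ends.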

Tech Stack Used in the Project

For this serverless ETL pipeline, Python is the programming language of choice, while Visual Studio Code serves as the IDE. The architecture is built around AWS services: AWS CDK for resource definition and deployment, Amazon S3 for storage, and AWS Lambda for running serverless functions. Together, these components form a robust, scalable serverless data pipeline.

The versatility and simplicity associated with Python, as well as its extensive library collection, make it an ideal language for Lambda functions and serverless applications. With AWS’s CDK (Cloud Development Kit), the deployment of cloud resources is made easier because infrastructure can be defined programmatically in Python and many other languages. AWS Lambda is a serverless compute service which scales automatically and charges only when functions are executed, making it very cost-effective for event-driven workloads. Amazon S3 is a highly scalable object storage service that features prominently in serverless pipelines as a staging area for raw data and the final store for the processed results. These components create the building blocks of a cost-effective and scalable serverless data pipeline.

  • Language: Python
  • IDE: Visual Studio Code
  • AWS Services:
    • AWS CDK: Infrastructure as Code (IaC) tool to define and deploy resources.
    • Amazon S3: Object storage for raw and processed data.
    • AWS Lambda: Serverless compute service to transform data.

Brief Description of Tools and Technologies:

  1. Python: A versatile programming language favored for its simplicity and vast ecosystem of libraries, making it ideal for Lambda functions and serverless applications.
  2. AWS CDK (Cloud Development Kit): An open-source framework that allows you to define AWS infrastructure in code using languages like Python. It simplifies the deployment of cloud resources.
  3. AWS Lambda: A serverless compute service that runs code in response to events. Lambda automatically scales and charges you only for the execution time of your function.
  4. Amazon S3: A scalable object storage service for storing and retrieving large amounts of data. In serverless pipelines, it acts as both a staging and final storage location for processed data.

Building the Serverless ETL Pipeline – Step by Step

In this tutorial, we’ll guide you through setting up a serverless pipeline using AWS CDK and AWS Lambda in Python. We’ll also use Amazon S3 to store data.

Step 1: Prerequisites

To get started, ensure you have the following installed on your local machine:

  • Node.js (v18 or later) → Download Here
  • AWS CLI (Latest version) → Install Guide
  • Python 3.x (v3.9 or later) → Install Here
  • AWS CDK (Latest version) → Install via npm.
  • Visual Studio Code → Download Here
  • AWS Toolkit for VS Code (Optional, but recommended for easy interaction with AWS)

Configure AWS CLI

To configure AWS CLI, open a terminal and run:


aws configure


Enter your AWS Access Key, Secret Access Key, default region, and output format when prompted.

Install AWS CDK

To install AWS CDK globally, run:

npm install -g aws-cdk

Verify the installation by checking the version:

cdk --version

Login to AWS from Visual Studio Code

Click on the AWS logo on the left side; it will ask for your credentials the first time.


For the profile name, use the IAM user name.


After signing in, the AWS Toolkit panel in the IDE will display your account’s resources.

Step 2: Create a New AWS CDK Project

Open Visual Studio Code and create a new project directory:

mkdir serverless_pipeline_project

cd serverless_pipeline_project


Initialize the AWS CDK project with Python:

cdk init app --language python

This sets up a Python-based AWS CDK project with the necessary files.

Step 3: Set Up a Virtual Environment

Create and activate a virtual environment to manage your project’s dependencies:

python3 -m venv .venv

source .venv/bin/activate  # For macOS/Linux

# OR

.venv\Scripts\activate  # For Windows

Install the project dependencies:

pip install -r requirements.txt

Step 4: Define the Lambda Function

Create a directory for the Lambda function:

mkdir lambda

Write your Lambda function in lambda/handler.py:

import boto3
import os

s3 = boto3.client('s3')
bucket_name = os.environ['BUCKET_NAME']

def handler(event, context):
    # Example: Upload processed data to S3
    s3.put_object(Bucket=bucket_name, Key='output/data.json', Body='{"result": "ETL complete"}')
    return {"statusCode": 200, "body": "Data successfully written to S3"}
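Before deploying, you can sanity-check the handler's logic locally. The sketch below substitutes a tiny in-memory fake for the S3 client (FakeS3 is our own stand-in, not part of boto3) and passes the client and bucket name in as parameters instead of reading them from module globals and environment variables, so the function body can run without AWS credentials:

```python
import json

class FakeS3:
    """Stand-in for boto3's S3 client, recording puts in memory."""
    def __init__(self):
        self.objects = {}

    def put_object(self, Bucket, Key, Body):
        self.objects[(Bucket, Key)] = Body

def handler(event, context, s3, bucket_name):
    # Same body as lambda/handler.py, with dependencies injected
    s3.put_object(Bucket=bucket_name, Key='output/data.json',
                  Body='{"result": "ETL complete"}')
    return {"statusCode": 200, "body": "Data successfully written to S3"}

fake = FakeS3()
resp = handler({}, None, fake, "my-test-bucket")
print(resp["statusCode"])  # → 200
stored = fake.objects[("my-test-bucket", "output/data.json")]
print(json.loads(stored)["result"])  # → ETL complete
```

Catching a typo in the key or body here is much faster than a deploy-and-invoke round trip.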

Step 5: Define AWS Resources in AWS CDK

In the serverless_pipeline/serverless_pipeline_stack.py, define the Lambda function and the S3 bucket for data storage:

from aws_cdk import (
    Stack,
    aws_lambda as _lambda,
    aws_s3 as s3
)
from constructs import Construct

class ServerlessPipelineProjectStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Create an S3 bucket
        bucket = s3.Bucket(self, "ServerlessPipelineProjectS3Bucket")

        # Create a Lambda function
        lambda_function = _lambda.Function(
            self,
            "ServerlessPipelineProjectLambdaFunction",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="handler.handler",
            code=_lambda.Code.from_asset("lambda"),
            environment={
                "BUCKET_NAME": bucket.bucket_name
            }
        )

        # Grant Lambda permissions to read/write to the S3 bucket
        bucket.grant_read_write(lambda_function)

Step 6: Bootstrap and Deploy the AWS CDK Stack

Before deploying the stack, bootstrap your AWS environment:

cdk bootstrap

Then, synthesize and deploy the CDK stack:

cdk synth

cdk deploy


You’ll see a message confirming the deployment.

Step 7: Test the Lambda Function

Once deployed, test the Lambda function using the AWS CLI:

aws lambda invoke --function-name ServerlessPipelineProjectLambdaFunction output.txt

You should see a response like:

{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}

Check the output.txt file; it will contain:

{"statusCode": 200, "body": "Data successfully written to S3"}

A folder called output will be created in S3 with a file data.json inside it, containing:

{"result": "ETL complete"}

Step 8: Clean Up Resources (Optional)

To delete all deployed resources and avoid AWS charges, run:

cdk destroy

Summary of What We Built

For this project, we configured AWS CDK within a Python environment to create and manage the infrastructure needed for a serverless ETL pipeline. The processing unit of the pipeline is an AWS Lambda function we developed for data processing, and we added Amazon S3 as a scalable, durable storage solution for raw and processed data. We deployed the required AWS resources with AWS CDK, which automated the deployment process. Finally, we confirmed the entire setup worked as expected by invoking the Lambda function and verifying that data flowed properly through the pipeline.

Next Steps

In the future, I see multiple opportunities to improve and extend this serverless pipeline. One improvement would be using AWS Glue for data transformation, since it can automate and scale complicated ETL processes. Integrating Amazon Athena would enable serverless querying of the processed data, allowing for efficient analytics and reporting. Furthermore, Amazon QuickSight could provide data visualization that enhances the insights obtained from the data, letting users interact with it on dashboards. These steps would build on what we have already created and produce a more comprehensive and sophisticated data pipeline.

By following this tutorial, you’ve laid the foundation for building a scalable, event-driven serverless pipeline in AWS using Python. Now, you can further expand the architecture based on your needs and integrate more services to automate and scale your workflows.

Author: Ashis Chowdhury, a Lead Software Engineer at Mastercard with over 22 years of experience designing and deploying data-driven IT solutions for top-tier firms including Tata, Accenture, Deloitte, Barclays Capital, Bupa, Cognizant, and Mastercard.