Evolution of Data Science: New Age Skills for the Modern End-to-End Data Scientist

In the 1980s, Wall Street discovered that physicists were great at solving complex financial problems that made their firms a bucket load of money. Becoming a "quant" meant joining the hottest profession of the time.
Twenty years later, in the late 2000s, as the world was on the cusp of a big data revolution, a similar trend emerged as businesses sought a new breed of professionals capable of sifting through all that juicy data for lucrative insights.
This emerging field became known as data science.
In 2018, while completing my PhD in modelling frontier cancer treatments, I transitioned from academia to industry and began working for one of the largest banks in Australia.
I was joined by seven other STEM doctorate candidates from top universities across the country, with specialisations ranging from diabetes research and machine learning to neuroscience and rocket engineering.
Despite being scattered across every corner of the company, we all eventually ended up in the bank's big data division – a twist we still joke about to this day.

As with the physicists who became quants, was there something about STEM backgrounds that naturally lent itself to data science?
As Thomas H. Davenport and DJ Patil explained in their famous 2012 article Data Scientist: The Sexiest Job of the 21st Century, this made a lot of sense:
"…it's becoming clear why the word "scientist" fits this emerging role [of the ‘Data Scientist', who is expected to] design equipment, gather data, conduct multiple experiments and communicate their results.
Thus, companies looking for people who can work with complex data have had good luck recruiting among those with educational and work backgrounds in the physical or social sciences.
Some of the best and brightest data scientists are PhDs in esoteric fields like ecology and systems biology. [Many] data scientists working in business today were formally trained in computer science, math, or economics. They can emerge from any field that has a strong data and computational focus."
Twelve years on, the type of people who make great data scientists largely hasn't changed, although I have seen fantastic data scientists from all walks of life.
What has dramatically changed, however, are business expectations, the technology landscape and the expanding range of skills a data scientist is expected to have.
I struggled with imposter syndrome during my PhD. Six years after moving into industry as an engineer and data scientist, I continue to struggle with imposter syndrome. Keeping up with everything can feel overwhelming.
In this article, I'll dive into three juicy questions.
- How has data science evolved into where we are today?
- What are the essential skills for a modern data scientist?
- Where is analytics heading and what does it mean for your career?
Evolution of Data Science
In 2006, the Harvard Business Review published an article titled "Competing on Analytics".
This influential piece by Thomas Davenport and Jeanne Harris sparked widespread discussion on leveraging analytics for competitive business advantage.
For descriptive analytics and reporting, firms invested in BI tools from the likes of SAS, SAP, IBM, Microsoft, Tableau, Oracle, MicroStrategy and QlikView, empowering data analysts to uncover trends in historical data and make data-informed business decisions.

But the future lay in machine learning and predictive analytics.
Firms would soon begin investing in big data tools, along with the talent who would be able to milk that wealth of data for insights. A new role, the Data Scientist (DS), was tasked with helping lead the charge toward transforming firms into data-driven organisations.
The ultimate goal was to base every decision on data and have enough data and insights to craft products and services tailored to individual customers – known as hyperpersonalisation.
Birth of the "Sexiest Profession"
The term "data science" was coined by DJ Patil and Jeffery Hammerbacher in 2008, who headed data and analytics at LinkedIn and Facebook, respectively.
Patil would go on to become the first US Chief Data Scientist in 2015, while Hammerbacher would soon co-found big data giant Cloudera.
In 2009, Hal Varian, Google's Chief Economist, gave a prescient prediction:
"I keep saying the sexy job in the next ten years will be statisticians. People think I'm joking, but who would've guessed that computer engineers would've been the sexy job of the 1990s?"
In 2012, Patil and Davenport declared that the unique combination of scarcity and highly desirable yet difficult-to-master skills would make data scientists the sexiest professionals of the 21st century:
"If ‘sexy' means having rare qualities that are much in demand, data scientists are already there. They are difficult and expensive to hire and, given the very competitive market for their services, difficult to retain. There simply aren't a lot of people with their combination of scientific background and computational and analytical skills."
Facebook Puts Data Science on the Map
That same year, Facebook formed its own data science team and quickly demonstrated the profound value of data science when data, talent and organisational autonomy aligned.
The 12-strong team of researchers at Menlo Park, led by Cameron Marlow, a 35-year-old MIT PhD, mined the network's rapidly growing wealth of social data for insights that would pioneer many of the social media platform's new features for years to come.
The team identified patterns that enabled Facebook to suggest friends users hadn't yet ‘friended' and made hundreds of tweaks to the website through A/B testing, a now bread-and-butter tool in many data scientists' toolkits.
Remarkably, Marlow's researchers made groundbreaking discoveries in human social behaviour.
You may have heard of the concept of six degrees of separation – the idea that any person on Earth is just six steps away from any other person. Facebook discovered that, on average, just four degrees of separation are needed, meaning "a friend of your friend knows a friend of their friend," as explained in their technical paper.
The team even developed a way to calculate a country's "gross national happiness" by employing sentiment analysis on its citizens' posts, analysing occurrences of words and phrases that signalled positive and negative emotions.
Hadoop: Democratising Big Data Analytics to the World
It certainly helped that Facebook had a very short path from idea to experiment on hundreds of millions of people, a luxury not available to researchers in academia or data scientists at almost any other company in the world – even today.
However, Marlow's team also faced significant hurdles in implementing their experiments. The technology to crunch mind-bogglingly large amounts of data and surface insights as granular as the individual was in its infancy, and it fell to big data and data science pioneers in Silicon Valley's big tech sector to fashion the frameworks and tools to do it.
Inspired by early work at Google, Yahoo played an instrumental role in developing Apache Hadoop, an open-source big data stack which enabled seemingly impossible computing tasks on huge datasets by distributing all that data across many cheap, or commodity, machines. Over time, the Hadoop ecosystem of tools was refined by engineers from Amazon, Facebook, eBay, Google, LinkedIn, Microsoft, Twitter and Walmart.
When I started at my bank in 2018, we had just begun exploring big data technologies in earnest, building a data lake on a Cloudera-provided Hadoop stack. Cloudera, founded in 2008 by folks from Facebook, Google, Oracle and Yahoo, packaged Hadoop into versions designed for easy enterprise adoption.
Our burgeoning data lake was soon being filled by legions of data engineering scrum teams, who laid down the data plumbing, pulling data from across the bank and pumping it into the lake as Hive tables.
Hive, a technology developed by Facebook engineers and open-sourced in 2008, empowered data analysts to query big data tables using a language they knew well: SQL. And Spark, which reached its 1.0 release in 2014, enabled our data scientists to swim around in our data lake using a language they knew well: Python.
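To get a feel for this stack, here's a minimal PySpark sketch of that workflow; the table and column names are hypothetical, but the pattern – SQL for analysts, Python DataFrames for data scientists, both over the same Hive tables – is the real draw.

```python
# A minimal PySpark sketch (table and column names are hypothetical) of the
# workflow described above: analysts query Hive tables with SQL while data
# scientists keep working on the same data in Python.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("data-lake-demo")
    .enableHiveSupport()  # lets Spark read the lake's Hive tables
    .getOrCreate()
)

# Analysts can stay in familiar SQL...
monthly = spark.sql("""
    SELECT customer_id, SUM(txn_amount) AS monthly_spend
    FROM lake.transactions            -- hypothetical Hive table
    WHERE txn_date >= '2018-01-01'
    GROUP BY customer_id
""")

# ...while data scientists carry on in Python over the same data.
high_spenders = monthly.filter(monthly.monthly_spend > 10_000)
high_spenders.show(5)
```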
Junior Data Scientists Flood the Market
By 2015, Hadoop technologies like Hive and Spark finally made big data analytics financially and technologically feasible for firms worldwide.
But other hurdles remained:
- Is your firm's data ready for data science?
- Does your firm know how to manage its data science talent effectively?
- Does your firm possess the necessary data science talent?
The first two points set the groundwork for successful data science.
Without good data and a solid data and analytics strategy, it's difficult for leaders to unlock the full potential of their data scientists.

Still, that didn't stop firms from plucking data science talent off the market and figuring out what to do with them later, partly because they were such a scarce human commodity.
And they were scarce because formal programs to train data scientists were limited. This is often the case with new technologies, where adoption and demand initially far outstrip the capacity to train new workers to meet that demand.
In the 1980s, universities eventually began churning out financial engineering majors to meet Wall Street's thirst for quants. In the 1990s, the rise of search engineers prompted universities to recalibrate their computer science programs to keep up with the growing demand.
In the late 2010s, seeing students rush toward analytics careers, universities scrambled to restructure their mathematics and statistics departments to create data science courses and degrees.
Just like filling the skills gap for the hottest gigs in the '80s and '90s, this wasn't an overnight fix.
In 2015, I was teaching calculus, linear algebra and statistics to maths and stats majors at a large Australian university. A few years later, just before I left my academic job for industry, I was teaching the same subjects – calculus, algebra and stats – now under the banner of data science.
These were pretty much the exact same courses!
By 2019, universities, colleges and MOOCs were churning out data science talent en masse, resulting in a glut of junior data scientists flooding the market. Vicki Boykis offered some sage advice at the time, which I still regard as sensible:
"Don't do what everyone else is doing, because it won't differentiate you. You're competing against a stacked, oversaturated industry and just making things harder for yourself. [The] number of data science positions is estimated at 50k. The number of data engineering postings is 500k. The number of data analysts is 125k.
It's much easier to come into a data science and tech career through the "back door", i.e. starting out as a junior developer, or in DevOps, project management, and, perhaps most relevant, as a data analyst, information manager, or similar, than it is to apply point-blank for the same 5 positions that everyone else is applying to.
It will take longer, but at the same time as you're working towards that data science job, you're learning critical IT skills that will be important to you your entire career."
The challenge is that formal courses and MOOCs only provide the technical skills necessary to get you through the door. They can't effectively teach you the domain knowledge and stakeholder management skills needed to thrive in an organisational setting – the soft skills needed to drive projects and reliably deliver value. Hence, as junior scientists saturated the market, demand for experienced (aka senior) data scientists only continued to heat up.
I was exactly one of these junior data scientists, with some technical skills and no industry experience. Oh boy…
After starting at my bank in 2018, I was quickly thrown into large data projects, learning soft skills through osmosis and lots of trial and error, while rapidly identifying gaps in my technical knowledge and using online courses on the weekends to plug them.
New Age Skills – Data Engineering, MLOps & GenAI
By 2020, it became clear that data science was moving towards engineering, becoming sandwiched between the emerging fields of data engineering and MLOps in the data value chain:
Data scientists looking upstream needed lots of good data, a fundamental gap that managers in most firms realised was preventing them from becoming truly data-driven organisations.
"Without good data, AI is dead."
This birthed the need for Data Engineers (DE), the data plumbers of the firm, who would open up their plumbing toolkit to build the data pipelines that sourced, wrangled and cleaned the big data for data scientists.
At my bank, the lack of easily accessible and reliable data has really put a damper on our data science efforts. Following the current industry trend towards productising enterprise data, our DEs have stopped ingesting data for wasteful bespoke pipelines built for single projects. Instead, they are focusing on building out strategic, reusable and trustworthy data products.
Looking downstream, data scientists needed to deploy their trained models into production environments to serve real customers, monitor their performance for model and data drift and maintain models over their lifespan.
This has carved out a new role: the Machine Learning Engineer (MLE), currently one of the hottest gigs alongside the ‘AI engineer'.
MLEs take care of the operations side of machine learning (MLOps). With their blend of ML skills and software engineering chops, MLEs are the folks who take your pickle files, Python scripts and spaghetti code, refactor and optimise them into performant production code, ensuring your hard work realises real value by putting your model in front of customers.
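As a rough illustration of what that productionising step can look like, here's a minimal sketch that wraps a pickled model in a web service. FastAPI is just one common choice, and the file, feature and endpoint names are all hypothetical.

```python
# A minimal sketch of "productionising" a data scientist's pickled model as a
# web service. FastAPI is one common choice; the file, feature and endpoint
# names here are all hypothetical.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.pkl")  # the data scientist's trained artefact


class LoanApplication(BaseModel):
    income: float
    credit_score: float
    loan_amount: float


@app.post("/predict")
def predict(loan: LoanApplication):
    features = [[loan.income, loan.credit_score, loan.loan_amount]]
    probability = model.predict_proba(features)[0][1]
    return {"default_probability": float(probability)}

# Run locally with: uvicorn service:app --reload  (assuming this file is service.py)
```

A real MLE would layer input validation, logging, monitoring and containerisation on top, but the core job – turning a notebook artefact into a service a customer can hit – looks like this.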
Finally, there's the current 800-pound gorilla in tech: Generative AI.
In 2017, Google researchers came out with a game-changing AI architecture called the transformer. A year later, OpenAI took this and ran with it, pre-training transformers on massive datasets to create their famous Generative Pre-trained Transformer (GPT) foundation models.

ChatGPT, powered by a Large Language Model (LLM) built on GPT-3.5, became AI's iPhone moment, amassing a whopping 100 million users within two months of its launch in late 2022.
Since then, companies worldwide have been scrambling to experiment with and adapt GenAI use cases. This has put a lot of the heavy lifting on data scientists and machine learning specialists, increasing the pressure to quickly learn new ideas and tools.

In just a few short years, many data scientists who had learned to train sklearn models on clean datasets in a Jupyter Notebook suddenly found themselves on a whole new frontier. They needed to spot the most promising GenAI use cases on limited funding, adapt LLMs for their firms using techniques like fine-tuning or Retrieval-Augmented Generation (RAG) with cloud tools like Microsoft Azure ML or AWS SageMaker, keep those pesky AI hallucinations in check and have enough engineering experience to help deploy their handiwork into production to serve real customers.
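To make the RAG idea concrete, here's a toy sketch of the pattern: retrieve the most relevant snippets, then ask the LLM to answer only from them. The retrieval here uses simple TF-IDF rather than embeddings and a vector store, and `call_llm` is a stand-in for whichever hosted model your firm has approved.

```python
# A toy sketch of the RAG pattern: retrieve the most relevant snippets, then
# ground the LLM's answer in them. Real systems use embeddings and a vector
# store rather than TF-IDF; `call_llm` is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Fixed-rate home loans lock in the interest rate for one to five years.",
    "Offset accounts reduce the interest charged on your mortgage balance.",
    "Repaying a fixed-rate loan early may incur break fees.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

question = "Will I pay a fee if I repay my fixed loan early?"
context = "\n".join(retrieve(question))
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
# answer = call_llm(prompt)  # hypothetical call to the firm's approved LLM
print(prompt)
```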
At our bank, we've been diving into GenAI use cases through a mix of centralised planning and decentralised hackathons. Hackathons give our team the chance to brainstorm potential use cases and prototype working products in a blue-sky environment. These ideas then head to our Chief Technology Office, which maps out a strategic roadmap for AI adoption, the necessary data infrastructure and the guardrails needed. They ultimately decide which GenAI use cases our bank will take forward into full build and deployment.
The Modern Data Scientist
Back in 2018, Emmanuel Ameisen mused that:
"When it comes to recruiting, Hiring Managers of teams all over the valley most often complain that while there is no shortage of people able to train models on a dataset, they need engineers that can build data driven products."
Fast forward six years and most organisations and data science managers will tell you that while there's no longer a shortage of junior talent looking to prove their worth, they're now on the lookout for a special breed of data professional:
The end-to-end data scientist.
These individuals not only have a firm grasp of core data science and modelling but also boast excellent soft skills and engineering abilities. Don't panic, though – these folks are exceedingly rare. Most organisations would be thrilled with an experienced data scientist who has some level of end-to-end skills.
Eugene Yan provides four compelling reasons why end-to-end data scientists are so valuable:
1. Gain crucial context
When a data scientist focuses solely on modelling, with no visibility up or down the data value chain, tunnel vision sets in and leads to suboptimal outcomes.
For example, if our Mortgages team notices a decline in new customers taking up loans and asks our data science team to train a new marketing model, we wouldn't jump the gun. Various causes could be at play:
- Products: Are our latest mortgage products compelling? Is there new competition from emerging fintechs? What about the external macroeconomic environment? A data scientist with the right domain knowledge, networks and soft skills might identify issues here.
- Data: Are our data pipelines functioning correctly? Is our data quality up to scratch? A data scientist with data engineering skills might be able to investigate our production ETL pipelines and spot any glaring issues.
- Models: Are our existing models still effective? Is there data or model drift? Is there an issue with the production environment? A data scientist with some MLOps skills might be able to diagnose what's going on with our existing deployed models.
Note how most of the issues above aren't ML problems. Indeed, as Yan contends:
"More often than not, the problem – and solution – lies outside of machine learning."
Similarly, without downstream visibility, it's hard to have the situational awareness to ensure that your ML solution aligns with your business's engineering and product constraints.
Our bank recently launched a Digital Instant Mortgage product where our 11 million customers can get unconditional approval for a multi-million dollar home loan in just minutes. The data scientists had to coordinate closely with our infrastructure and product teams to ensure that what they built was technologically feasible and integrated smoothly with our online banking app.
2. Bust communication overhead
More cooks in a kitchen means more overhead from all that extra coordination.
You need to spend more time communicating, aligning ideas and negotiating details, which ultimately means less focus time to knuckle down and do real work. (Bless the Focusing status on Microsoft Teams!)
To add insult to injury, communication hasn't gotten easier in the COVID era, when half my colleagues are at home on any given day. It's a trickier affair to do technical whiteboarding – like collaborating on diagrams and doing math – on Teams calls than it is to get everyone in a room with a real whiteboard and workshop it out. (Don't get me wrong, I'm a huge proponent of WFH!)
The biggest problem is that communication overhead grows almost quadratically as your team size increases. Richard Hackman, a Harvard psychologist, showed that the number of relationships in a team of n people is:

links = n × (n − 1) / 2
Adding a few extra cooks in that kitchen quickly explodes the number of network links. This means your team spends disproportionately more time on Skype and Teams, increasing the risk of misalignment and misunderstanding the broader picture.
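A quick back-of-the-envelope check of Hackman's formula shows how fast this bites:

```python
# Hackman's formula: relationships = n(n - 1) / 2
for n in [3, 5, 8, 12]:
    print(f"{n} people -> {n * (n - 1) // 2} relationships to maintain")
# 3 people -> 3, 5 people -> 10, 8 people -> 28, 12 people -> 66
```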
In the data science process, you can see how this plays out for our data engineer, data scientist and machine learning engineer triumvirate.
DE & DS have to align on how to clean data and engineer features. With data science being so iterative, these conversations might be ongoing throughout the project. Meanwhile, MLE and DS need to discuss how to deploy, monitor and maintain the model. MLE and DE might need to check in to discuss things like pipeline scalability.
If your organisation is small enough, there might be just one of each role – three relationships to maintain for the fledgling DE/DS/MLE trio. In large firms like mine, data scientists often need to keep various other technical stakeholders in the loop, like Service Operation Managers (SOM), Scrum Masters (SM) and Product Owners (PO), who might then unnecessarily interject on various matters due to a lack of context. Getting the smallest things across the line can sometimes be a challenge.
All in all, a data scientist with a good amount of engineering skills might be able to cut down on a significant chunk of this communication overhead.
3. Drive ownership and accountability
Splitting the data science process across multiple roles can sometimes lead to responsibility dodging, as siloed individuals focus on their specific tasks and leave everything else to others.
This often results in the classic ‘Throw It Over the Wall' scenario:
The DE crafts some features and tosses a bunch of database tables over to the DS.
"Data delivered! May the nulls be ever in your favour."
The DS trains the model and sends a mishmash of Python scripts flying over to the ML engineer, who's left to make sense of it all. With the least context on the business problem, data and modelling approach, they're expected to turn it into clean and performant production-grade code.
"Model's ready! My spaghetti code might need some TLC."
When something goes wrong and a fire starts, who are the firefighters? Get ready for finger-pointing as various players do their best to cover their own asses.
This kind of behaviour has been well-documented by researchers:
- Diffusion of Responsibility: When others are present, people are less likely to take ownership, responsibility or act, like the infamous case of the Bystander Effect where 38 people watched a woman get stabbed but didn't intervene.
- Social Loafing: We tend to put in less effort when working in a group than when working alone.
An end-to-end data scientist with strong engineering skills is empowered to take full ownership of the entire data science process from start to finish. This enables them to handle everything from understanding customer needs to model deployment, using the right metrics to evaluate the success of the project as a complete, data-driven product.
4. Iterate and learn faster
As Yan succinctly puts it:
"With greater context and lesser overhead, we can now iterate, fail (read: learn), and deliver value faster."
Being able to see the whole picture and iterate quickly fosters an ideal ‘fail fast and learn' ethos, which directly translates to a faster pace of innovation for the firm.
All up, becoming more end-to-end can boost a data scientist's motivation and job satisfaction. They get more autonomy to tackle problems on their own, more chances to upskill and master their craft and a stronger sense of purpose from having a direct connection to their work and its outcomes.
Skills of an End-to-End Data Scientist
So what does it take to become more end-to-end? I like to categorise the full range of skills across three 'tiers':
Tier 1 Skills
These are the foundational skills you gain from formal education, whether from university or various online courses and bootcamps.
- Programming: Especially SQL for data cleaning and wrangling, and scripting languages like Python/R for prototyping ML models.
- Data Analysis: Understanding and visualising data and using statistical tools like A/B testing and inference to make data-driven claims.
- Machine Learning: Engineering features, training and tuning models and choosing the right metrics.
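To make this tier concrete, here's a minimal sklearn sketch covering the trio above – wrangling a dataset, tuning a model and choosing a sensible metric. The dataset and hyperparameter grid are purely illustrative.

```python
# A minimal sketch of the Tier 1 trio in action: load and split data, tune a
# model with cross-validation, and judge it with a sensible metric. The
# dataset and hyperparameter grid are purely illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Tune a couple of hyperparameters with 5-fold cross-validation
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [3, None]},
    scoring="roc_auc",
    cv=5,
)
search.fit(X_train, y_train)

# Evaluate on held-out data with a metric suited to the problem
auc = roc_auc_score(y_test, search.predict_proba(X_test)[:, 1])
print(f"Best params: {search.best_params_}, test AUC: {auc:.3f}")
```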

Tier 2 Skills
These skills are naturally acquired through experience as a practising data scientist in the industry.
- Soft Skills: Engaging with stakeholders, facilitating across teams, communicating effectively and gaining buy-in.
- Product Skills and Business Acumen: Understanding customer problems and pain points and crafting requirements that enable your solution to maximise impact.
- Domain Expertise: Building knowledge of industry-specific trends, business processes, relevant domain-specific metrics and developing an intuition for what techniques work (and don't) for ML problems in your domain.
Tier 3 Skills
These are ‘frontier' skills that take you closer to being an end-to-end data scientist – skills you can leverage to potentially pivot into new roles or careers:
- Data Engineering: Mobilising large volumes of data from different sources, using the right tools to build scalable and performant pipelines, and integrating that data into a data warehouse or data lake. These are the skills of your firm's data engineers, who clean and wrangle data into a usable state and build and maintain an organisation's data plumbing system for a living.
- MLOps: The art of deploying, monitoring and maintaining ML models in production. Machine learning engineers need to be familiar with containerisation tools like Docker, orchestration frameworks like Kubernetes, CI/CD pipelines and DevOps principles, model monitoring for data and model drift (see the drift-check sketch after this list) and scalability considerations.
- Software Engineering: Writing code that is not only functional but also well-structured, modular and maintainable. This includes knowledge of design patterns, code optimisation and testing methodologies.

- Generative AI: Gartner recently identified GenAI as the most sought-after AI solution for firms worldwide, with the most common use case being a 'private ChatGPT' for employees and customers. This has created demand for data scientists and ML practitioners to quickly learn the dark arts of fine-tuning and RAG on foundation models. Fine-tuning adjusts a foundation model's parameters without starting from scratch, but it still requires a significant amount of clean and relevant data – a requirement that has caught many firms off guard and produced underwhelming results. RAG remains the most popular approach for enterprise GenAI adoption, as it fortifies against hallucinations while ensuring AI outputs are traceable – two crucial requirements, especially for regulated industries like banking and healthcare where AI safety is paramount.
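As promised above, here's a minimal sketch of one of those MLOps tasks – checking a feature for data drift with the Population Stability Index (PSI). The data, bin count and threshold are illustrative; real monitoring would run this per feature on a schedule.

```python
# A minimal sketch of one MLOps task above: checking a feature for data drift
# with the Population Stability Index (PSI). Data, bin count and the 0.2
# threshold are illustrative.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training (expected) and production (actual) feature."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    clipped = np.clip(actual, cuts[0], cuts[-1])  # keep live data inside the training range
    e_pct = np.histogram(expected, bins=cuts)[0] / len(expected)
    a_pct = np.histogram(clipped, bins=cuts)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_income = rng.normal(70_000, 15_000, 10_000)
live_income = rng.normal(78_000, 15_000, 10_000)  # live data has shifted upward

print(f"PSI = {psi(train_income, live_income):.3f}")  # > 0.2 suggests material drift
```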

Final Words
Since being dubbed the ‘sexiest job of the 21st century' in 2012, data science has traversed every stage of the Gartner hype cycle.
Where do we go from here? I offer two main tips:
Firstly, as the enterprise data and tech landscape evolves, there will be no shortage of new skills and technologies for data scientists to sink their teeth into. Understand that very few will ever become true end-to-end data scientists – a unicorn rarer than the original data scientist unicorn by an order of magnitude – and that's okay.
I hope this is a relief and that you can focus on the journey of becoming more end-to-end by gradually picking up engineering skills while refining your business acumen, domain knowledge and soft skills.

Some suggestions to catalyse your progress:
- Volunteer for challenging and messy projects at work. This one shouldn't come as a surprise.
- Hustle on your own end-to-end projects on the side. Brainstorm business ideas with product-market fit, gather messy data and finish up with a deployed app. The entrepreneurial skills you'll gain alone will make this journey worthwhile.
- Join a startup-like team in your company that requires team members to wear multiple hats and iterate products with aggressive deadlines. Learn fast or die. It's a great way to internalise the 80–20 rule and become a better project manager because you'll only have time for the most important, high-impact tasks and make daily decisions on what to prioritise.
All up, you'll want to live and breathe a growth mindset and inhale opportunities to develop yourself outside of your core competencies.
Secondly, be prepared for the possibility that data science as we know it may be completely different in a decade's time.
A hallmark of every widely adopted technology is that it becomes easier to use over time and democratised to the masses. Analytics is no different. Self-serve BI tools like Power BI and Tableau make it increasingly easy to surface insights with just a few clicks. Similarly, the rise of all-in-one no-code ML platforms, like Alteryx and Dataiku, is putting the power of advanced analytics into the hands of everyday knowledge workers.
Every Wednesday afternoon, I host a Dataiku training session for our bank's users, where a data scientist from Dataiku's Singapore office dials in to do a demo, like how to do Natural Language Processing (NLP) on the Dataiku Data Science Studio platform.
Three years ago, he showcased how to pull up the NLTK Python package in a Jupyter Notebook directly integrated inside Dataiku – nothing terribly exciting for practising data scientists. Two years ago, he demonstrated how to do everything – tokenisation, stemming, lemmatisation – using codeless Dataiku Recipes. This was interesting, as it put the power of NLP into the hands of those who couldn't code. Last year, our Dataiku expert told our audience to forget everything and use Dataiku's latest GenAI integration. Now even our data analysts and Excel junkies could do NLP.
Fascinatingly, a Dataiku rep predicted at our firm's recent annual data conference that the concept of an all-in-one ML platform as it stands today would soon be obsolete. Knowledge workers will simply use natural language to query AI for insights, with all the number-crunching abstracted away.
Things are moving so fast in this space, right?
Does this mean it's futile to upskill in data science? Software engineers are asking a similar question in the age of GPT-4 and GenAI copilots that can write entire apps in seconds.
My take: don't be too alarmist.
Skills like data science and engineering are, at their core, fundamentally about problem-solving, making them extremely transferable. I can't predict exactly which new careers and roles these skills might transfer into, but there's a good chance the role you'll be doing in a decade doesn't yet exist today.
Don't lose sleep over what you can't control, keep investing in yourself – the best investment anyone can make – and just enjoy the ride.