Fine-tune a Mistral-7B model with Direct Preference Optimization
Boost the performance of your supervised fine-tuned models
How ChatGPT is Transforming the Way We Teach Software Development
Learning to code when AI assistants already master the skill
A Winding Road to Parameter Efficiency
Deliberately Exploring Design Decisions for Parameter Efficient Finetuning (PEFT) with LoRA
SW/HW Co-optimization Strategy for LLMs – Part 2 (Software)
Software is eating the world. What does the software landscape for LLMs look like, and which emerging libraries and frameworks improve LLM performance?
A Surgeon's Reflections on Artificial Intelligence
A Clinical Perspective on Medical Innovation
Tuning-Free Longer Context Lengths For LLMs – A Review of Self-Extend (LLM Maybe LongLM)
A simple strategy that lets LLMs consume longer-context inputs at inference time, without the need for fine-tuning.
Philosophy and data science – Thinking deeply about data
Part 3: Causality
2024: The year of the value-driven data person
Growth at all costs has been replaced by a need to operate efficiently and be ROI-driven; data teams are no exception
Prompt Engineering, Agents, and LLMs: Kickstart a New Year of Hands-On Learning about AI
The stories that resonated the most with our community in the past month
Generative AI is a Gamble Enterprises Should Take in 2024
LLMs today suffer from inaccuracies at scale, but that doesn't mean you should cede competitive ground by waiting to adopt generative AI.
Navigating the AI Landscape of 2024: Trends, Predictions, and Possibilities
2024 beckons with a promise of innovation: a year where AI and technology converge to redraw the maps of possibility.
How to Cut RAG Costs by 80% Using Prompt Compression
Accelerating Inference With Prompt Compression
What Makes A Strong AI?
"The Book of Why" Chapters 9&10, a Read with Me series
LLMs for Everyone: Running the LLaMA-13B model and LangChain in Google Colab
Experimenting with Large Language Models for free (Part 2)
Future-Proof The Value Of Your Data Science Capability
By integrating data-engineering aptitude
What Next? Exploring Graph Neural Network Recommendation Engines
It's so difficult to decide what to watch next. Let's build an AI algorithm to do it for us!
Data Science Better Practices, Part 2 – Work Together
You can't just throw more data scientists at this model and expect the accuracy to magically increase.
AI-Powered Customer Support App: Semantic Search with PGVector, Llama2 with an RAG System, and…
Enhancing Communication in Global Markets: Leveraging PGVector for Multilingual Semantic Search, Llama2-Powered RAG Systems, and...
Why Do Data Teams Fail at Delivering Tangible ROI?
Identifying the common obstacles data teams face in delivering tangible ROI
Methods for generating synthetic descriptive data
Use various data source types to quickly generate text data for artificial datasets.
Genius Cliques: Mapping out the Nobel Network
Combining Network Science, Data Visualization, and Wikipedia to uncover hidden connections between all the Nobel laureates.
Data Science Expertise Comes in Many Shapes and Forms
Our weekly selection of must-read Editors' Picks and original features
