We all know the AI landscape is moving fast. Building a solid foundation requires more than just reading headlines—it demands hands-on experience and a guided approach to the core concepts.

For some context, I’m no stranger to Machine Learning: I remember building my first Neural Network to predict the weather back at university (with some success), and I even attempted to classify clinical coding data before the field’s recent advancements. I’m also aware that, given the clients I work with, my knowledge is a little dated and my hands-on experience has been limited.

To tackle this, I recently dedicated a couple of learning days to diving deep into modern AI principles. My goal was simple: build foundational AI knowledge, get hands-on experience with multiple tools (like Claude, Gemini, and Copilot), and ultimately produce a set of resources to help others get started.

What follows is the curated list of resources and topics that formed the backbone of this deep dive.



1. Getting Started: Foundational Knowledge

For a solid introduction, I began with a course on Agentic AI—a great entry point for anyone new to modern AI concepts. It provides a good overview of the topic with practical, non-coding-based examples.

I’m very aware that not everyone uses Pluralsight, so here is a non-subscription alternative from Udemy that covers similar topics.

2. Essential Modern AI Concepts

The following topics have been highlighted as crucial areas for any AI enthusiast or developer to understand. I've compiled the video resources I've found useful, along with alternative text references for those who learn better from reading.

 
Model Context Protocol: An open standard for connecting AI applications to external tools, data sources, and other components in an AI system.

Vector Databases: Specialised databases used to store and manage vector embeddings for efficient similarity search.

RAG (Retrieval-Augmented Generation): A technique, typically built on vector search, that improves model output by retrieving relevant facts from external knowledge bases and supplying them to the model alongside the prompt.

Prompt Patterns: Structured approaches and best practices for engineering effective prompts to guide an AI model.

Tool Use / Function Calling: The ability of an LLM to recognise when a request needs an external function or tool and to ask the calling application to run it (a minimal dispatch sketch follows this list).

Embeddings: Vector representations of text, images, or other data that capture their semantic meaning (see the embedding and similarity sketch after this list).

Context Limits / Tokenisation: Understanding the maximum amount of input an LLM can process in one request and how text is broken down into tokens (a token-counting sketch follows this list).

Guardrails and Safety Layers: Mechanisms and policies implemented to ensure AI systems operate safely and ethically.
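
To make a couple of these concepts concrete, here is a minimal sketch of embeddings and similarity search, which is also the retrieval step behind RAG. It assumes the sentence-transformers library and the all-MiniLM-L6-v2 model purely as an example; any embedding model would do, and a real system would keep the vectors in a vector database rather than a Python list.

```python
# A minimal embeddings + similarity search sketch.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

# Any embedding model works; all-MiniLM-L6-v2 is just a small, free example.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Vector databases store embeddings for fast similarity search.",
    "RAG retrieves relevant documents and adds them to the prompt.",
    "Tokenisation splits text into the units a model actually reads.",
]

# Embed the documents and the query into the same vector space.
doc_vectors = model.encode(documents)
query_vector = model.encode("How does retrieval-augmented generation work?")

# Cosine similarity: higher means more semantically similar.
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(query_vector, v) for v in doc_vectors]
best = int(np.argmax(scores))
print(f"Most relevant: {documents[best]} (score {scores[best]:.2f})")
```

In a full RAG pipeline, the top-scoring documents would be pasted into the prompt so the model can ground its answer in them.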
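
Context limits are easier to reason about once you can count tokens. The sketch below uses OpenAI’s tiktoken library as one example tokeniser; other model families ship their own tokenisers, so both the encoding name and the counts are illustrative.

```python
# A minimal token-counting sketch (pip install tiktoken).
# Different models use different tokenisers, so counts vary per model.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Context limits cap how much text a model can read in one request."
tokens = enc.encode(text)

print(f"{len(text)} characters -> {len(tokens)} tokens")
print(tokens[:10])  # token IDs, not words: the model sees these integers
```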
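
Tool use follows the same basic shape whichever vendor you pick: the model returns a structured request naming a function and its arguments, your code runs it, and the result goes back to the model. The sketch below fakes the model’s side with a hard-coded JSON response, so the get_weather function and the response format are purely illustrative; each provider documents its own schema.

```python
# A minimal function-calling dispatch sketch.
# The "model response" is hard-coded here; in practice it comes back from
# the LLM API in whatever schema that provider defines.
import json

def get_weather(city: str) -> str:
    # Illustrative stub; a real tool would call a weather API.
    return f"It is 12 degrees and raining in {city}."

# Registry of tools the model is allowed to request.
TOOLS = {"get_weather": get_weather}

# Pretend the model decided it needs a tool and returned this structure.
model_response = json.loads('{"tool": "get_weather", "arguments": {"city": "Leeds"}}')

tool = TOOLS[model_response["tool"]]
result = tool(**model_response["arguments"])

# The result would normally be sent back to the model so it can finish its answer.
print(result)
```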

Next Steps

Building knowledge is just the first step; the real learning comes from hands-on work. My next steps involve getting Claude up and running on my local machine and starting development on a career-monitoring tool that I've been wanting to build for some time.

If you're on your own AI learning journey, I hope this curated list gives you some solid resources to understand the fundamentals. Happy learning!