Use langchain-core to build LLM applications with composable, modular abstractions. Access base classes for prompts, models, output parsers, and chains.
This skill helps you install and work with `langchain-core`, the foundational package of the LangChain ecosystem. It provides base abstractions for prompts, language models, output parsers, runnables, and chains that enable building complex LLM applications through composability.
First, install the package in your Python environment:
```bash
pip install langchain-core
```
For a specific version:
```bash
pip install langchain-core==1.2.7
```
LangChain Core's key abstractions include prompt templates, output parsers, messages, and runnables. Import the base classes you need:
```python
from langchain_core.prompts import PromptTemplate, ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser, JsonOutputParser
from langchain_core.runnables import RunnablePassthrough, RunnableSequence
from langchain_core.messages import HumanMessage, SystemMessage, AIMessage
```
Use the pipe operator to compose runnables:
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
# `model` must already be bound to a chat model instance,
# e.g. ChatOpenAI from the langchain-openai package
chain = prompt | model | StrOutputParser()
result = chain.invoke({"topic": "programming"})
```
Structure conversational interactions:
```python
from langchain_core.messages import HumanMessage, SystemMessage
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is LangChain?"),
]
```
Extend base classes to create custom components:
```python
from langchain_core.output_parsers import BaseOutputParser
class CustomParser(BaseOutputParser[str]):
    def parse(self, text: str) -> str:
        # Your custom parsing logic
        return text.strip().upper()
```
Common composition patterns:
**Simple chain with prompt and parser:**
```python
chain = prompt_template | model | output_parser
result = chain.invoke(input_data)
```
**Parallel execution:**
```python
from langchain_core.runnables import RunnableParallel
parallel = RunnableParallel(
    summary=summarize_chain,
    translation=translate_chain,
)
```
**Conditional routing:**
```python
from langchain_core.runnables import RunnableBranch
branch = RunnableBranch(
    (condition1, chain1),
    (condition2, chain2),
    default_chain,
)
```