Curated resources on Azure OpenAI, Large Language Models (LLMs), and applications.
🔹Brief each item in as few lines as possible.
🔹The dates are based on the first commit, article publication, or paper version 1 issuance.
🔹Capturing a chronicle and key terms of this rapidly advancing field.
🔹Disclaimer: Please be aware that some content may be outdated.
- Section 1: RAG
- Section 2: Azure OpenAI
- Section 3: LLM Applications
- Section 4: Agent
- Section 5: Semantic Kernel & DSPy
- Section 6: LangChain & LlamaIndex
- Section 7: Prompting & Finetuning
- Section 8: Challenges & Abilities
- Section 9: LLM Landscape
- Section 10: Surveys & References
- Section 11: AI Tools & Extensions
- Section 12: Datasets
- Section 13: Evaluations
- Microsoft LLM Framework
- Copilot Products & Azure OpenAI Service | Research
- Azure Reference Architecture
- Semantic Kernel: Micro-orchestration
- DSPy: Optimizer frameworks
- LangChain Features: Macro & Micro-orchestration
- LangChain Agent & Criticism
- LangChain vs Competitors
- LlamaIndex: Micro-orchestration & RAG
- Prompt Engineering
- Finetuning: PEFT (e.g., LoRA), RLHF, SFT
- Quantization & Optimization
- Other Techniques: e.g., MoE
- Visual Prompting
- AGI Discussion & Social Impact
- OpenAI Products & Roadmap
- Context Constraints: e.g., RoPE
- Trust & Safety
- LLM Abilities
- LLM Taxonomy
- LLM Collection
- Domain-Specific LLMs: e.g., Software development
- Multimodal LLMs
- LLM Surveys | Business Use Cases
- Building LLMs: from scratch
- LLMs for Korean & Japanese
- Learning and Supplementary Materials
- ref: external URL
- doc: archived doc
- cite: the source of comments
- cnt: number of citations
- git: GitHub link
- x-ref: cross reference
- 📺: YouTube or video
- 💡 or 🏆: recommendation