
Commit 14cf075

committed
add. page. basemin - prompt.
1 parent 9dc9d46 commit 14cf075

49 files changed

Lines changed: 2883 additions & 78 deletions


blog/content/page/playlists/index.en.md

Lines changed: 8 additions & 0 deletions
@@ -11,6 +11,8 @@ menu:
 
 ## 📚 Basics
 
+Short track on AI: terms and language models → LLM overview → **prompt engineering** as its own article.
+
 - **[AI basics – introduction](/en/p/ai-basics-intro/)**
   - 🎬 [Video](https://youtu.be/z9VBZn0XcVk) · reading 5 min / video 4 min
   - 🏷️ Terms: language model, dataset, parameters, AI/ML/NN, foundation models
@@ -22,6 +24,12 @@ menu:
   - 🏷️ LLM, transformer, token, temperature, prompt engineering, RAG, fine-tuning, RLHF, chain-of-thought, context window, hallucinations, multimodality, agents
   - 📊 Difficulty: basic
   - 📋 Prerequisites: none (intro to the series is enough)
+- **[AI basics – prompt engineering](/en/p/ai-basics-prompt-engineering/)**
+  - ⏱️ ~7 min read · no video
+  - 📋 Zero-shot, few-shot, chain-of-thought, roles, step-back, where prompts come from; mind-map layout; accordions
+  - 🏷️ prompt engineering, zero-shot, few-shot, CoT, role prompting, step-back, in-context learning
+  - 📊 Difficulty: basic
+  - 📋 Prerequisites: [LLM overview](/en/p/ai-basics-overview/) recommended
 
 ## 👥 On Their Shoulders

blog/content/page/playlists/index.md

Lines changed: 8 additions & 0 deletions
@@ -11,6 +11,8 @@ menu:
 
 ## 📚 Bare Minimum
 
+A short series on AI: terms and language models → LLM overview → **prompt engineering** as a standalone article.
+
 - **[AI bare minimum – introduction](/p/ai-basics-intro/)**
   - 🎬 [Video](https://youtu.be/z9VBZn0XcVk) · reading 5 min / video 4 min
   - 🏷️ Terms: language model, dataset, parameters, AI/ML/NN, foundation models
@@ -22,6 +24,12 @@ menu:
   - 🏷️ LLM, transformer, token, temperature, prompt engineering, RAG, fine-tuning, RLHF, chain-of-thought, context window, hallucinations, multimodality, agents
   - 📊 Difficulty: basic
   - 📋 Prerequisites: none (the series introduction is enough)
+- **[AI bare minimum – prompt engineering](/p/ai-basics-prompt-engineering/)**
+  - ⏱️ ~7 min read · no video
+  - 📋 Zero-shot, few-shot, chain-of-thought, roles, step-back, where prompts come from; mind-map structure; accordions
+  - 🏷️ prompt engineering, zero-shot, few-shot, CoT, role prompting, step-back, in-context learning
+  - 📊 Difficulty: basic
+  - 📋 Prerequisites: [LLM overview](/p/ai-basics-overview/) recommended
 
 ## 👥 On Their Shoulders

blog/content/post/ai-basics-intro/index.en.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ image: cover.jpg
 
 This is the first article in the “Bare minimum” series — a concise look at how AI works. Each piece will cover one idea or concept; I’ll try to keep them short and in order.
 
-We’ll start with common terms, then talk about prompt engineering, RAG systems, and agents that can handle complex multi-step tasks.
+We’ll start with common terms; elsewhere in the series — [prompt engineering](/en/p/ai-basics-prompt-engineering/), RAG systems, and agents that can handle complex multi-step tasks.
 
 **Video:** [Watch on YouTube](https://youtu.be/z9VBZn0XcVk)

blog/content/post/ai-basics-intro/index.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ image: cover.jpg
 
 This is the first article in the “Bare minimum” series — a short take on how AI works. Each article will cover one concept or idea; I’ll try to keep them short and sequential.
 
-We’ll start with common terms, then move on to prompt engineering, RAG systems, and agents capable of complex multi-step tasks.
+We’ll start with common terms; covered separately in the series — [prompt engineering](/p/ai-basics-prompt-engineering/), RAG systems, and agents capable of complex multi-step tasks.
 
 **Video version:** [Watch on YouTube](https://youtu.be/z9VBZn0XcVk)

blog/content/post/ai-basics-overview/index.en.md

Lines changed: 2 additions & 0 deletions
@@ -124,6 +124,8 @@ This overview for the «Bare minimum» series is an outline:
 <li>Iteratively refining prompts for accurate answers</li>
 </ul>
 
+<p>Deep dive by technique (zero-shot, few-shot, CoT, roles, step-back, …): <a class="link" href="/en/p/ai-basics-prompt-engineering/">AI basics – prompt engineering</a>.</p>
+
 <p><strong>Fine-tuning</strong></p>
 <ul>
 <li>Adapting the model to specific tasks and domains</li>

blog/content/post/ai-basics-overview/index.md

Lines changed: 2 additions & 0 deletions
@@ -124,6 +124,8 @@ image: cover.jpg
 <li>Iteratively improving queries for accurate answers</li>
 </ul>
 
+<p>In depth by technique (zero-shot, few-shot, CoT, roles, step-back, etc.): <a class="link" href="/p/ai-basics-prompt-engineering/">AI bare minimum – prompt engineering</a>.</p>
+
 <p><strong>Fine-tuning (further training)</strong></p>
 <ul>
 <li>Adapting the model to specific tasks and domains</li>
Lines changed: 138 additions & 0 deletions
@@ -0,0 +1,138 @@
---
title: "AI basics – prompt engineering"
description: "Prompt engineering: goals, zero-shot and few-shot, chain-of-thought, roles, step-back, and where good prompts come from"
date: "2026-03-25"
slug: "ai-basics-prompt-engineering"
tags:
- Artificial Intelligence
- Machine Learning
- База
image: cover.jpg
---

A «Bare minimum» article on how to phrase requests to large language models.

### Goal and essence {.toc-heading-only}

<details class="post-accordion">
<summary style="cursor: pointer; font-weight: 600;">Goal and essence</summary>
<div style="margin-top: 0.75em;">

<p><strong>A prompt is a short technical spec for the model.</strong> You state what to do, how to format the answer, and what to rely on. The clearer the spec, the less the model has to guess.</p>

<ul>
<li><strong>Quality.</strong> Clear instructions reduce vagueness and improve the usefulness of text, code, or structured output.</li>
<li><strong>Predictability.</strong> Fixed formats (lists, JSON, paragraph templates) and explicit constraints make outputs repeatable across runs.</li>
<li><strong>Building blocks.</strong> Treat the prompt as a mini-spec: role, context, task, output format, examples (if needed), success criteria.</li>
</ul>

</div>
</details>

### Zero-shot prompting {.toc-heading-only}

<details class="post-accordion">
<summary style="cursor: pointer; font-weight: 600;">Zero-shot prompting</summary>
<div style="margin-top: 0.75em;">

<p><strong>The task is given with no input–output examples.</strong> The model leans on pretraining plus your instruction in the current request.</p>

<ul>
<li>Works well for simple, unambiguous tasks when the desired format is obvious or one line away.</li>
<li><strong>Example:</strong> sentiment classification (“label as positive / neutral / negative”) with no labeled examples in the prompt.</li>
</ul>

<p>If zero-shot drifts, few-shot examples or explicit step-by-step reasoning (CoT) usually help.</p>

</div>
</details>
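The sentiment example above, written out as a zero-shot prompt — instruction and input only, no labeled pairs (the API call itself is out of scope; the wording is one possible phrasing):

```python
def zero_shot_sentiment_prompt(text: str) -> str:
    """Zero-shot: a single instruction plus the input, no examples in the prompt."""
    return (
        "Classify the sentiment of the following review.\n"
        "Answer with exactly one label: positive, neutral, or negative.\n\n"
        f"Review: {text}\n"
        "Label:"
    )

p = zero_shot_sentiment_prompt("The battery died after two days.")
```

Ending on `Label:` nudges the model to emit only the label rather than a paragraph.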

### Few-shot prompting {.toc-heading-only}

<details class="post-accordion">
<summary style="cursor: pointer; font-weight: 600;">Few-shot prompting</summary>
<div style="margin-top: 0.75em;">

<p><strong>In-context learning:</strong> you add one or more “input → gold output” pairs so the model picks up style, fields, and constraints.</p>

<ul>
<li>Especially useful when you need a <strong>strict or unusual format</strong> — tables, JSON with fixed keys, report templates.</li>
<li>Examples act as a contract for the answer: fewer arbitrary interpretations.</li>
</ul>

<p>Do not overload context: keep examples relevant, representative, and within the context window.</p>

</div>
</details>
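A minimal sketch of the “examples as a contract” idea: two gold pairs pin a strict JSON shape before the real input. The field names and sample data are invented for illustration:

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Few-shot / in-context learning: prepend input -> gold output pairs before the real query."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\n\nInput: {query}\nOutput:"

p = few_shot_prompt(
    examples=[
        ("meeting at 5pm with Ann", '{"event": "meeting", "time": "17:00", "with": "Ann"}'),
        ("lunch tomorrow noon", '{"event": "lunch", "time": "12:00", "with": null}'),
    ],
    query="call with Bob at 9am",
)
```

Note the second example deliberately shows the `null` case, so the model sees how a missing field should look.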

### Chain-of-Thought (CoT) {.toc-heading-only}

<details class="post-accordion">
<summary style="cursor: pointer; font-weight: 600;">Chain-of-Thought (CoT)</summary>
<div style="margin-top: 0.75em;">

<p><strong>Reasoning chain:</strong> the model emits intermediate steps, then the final answer. That tends to stabilize logic, arithmetic, and multi-step tasks.</p>

<ul>
<li><strong>Few-shot CoT:</strong> examples show not only the answer but the reasoning path — the model mimics that pattern.</li>
<li><strong>Zero-shot CoT:</strong> phrases like “think step by step” / “explain your reasoning first, then answer” often suffice.</li>
<li><strong>Uncertainty-routed CoT:</strong> explore multiple reasoning lines or alternatives when the task is ambiguous, then compare or pick a justified conclusion.</li>
</ul>

<p>CoT lengthens responses and latency; for trivial tasks a short instruction without reasoning may be enough.</p>

</div>
</details>
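The zero-shot and few-shot CoT variants can be sketched side by side; the worked example and trigger phrasing are illustrative, not canonical:

```python
def cot_prompt(question: str, mode: str = "zero-shot") -> str:
    """Chain-of-thought: ask for intermediate steps before the final answer."""
    if mode == "zero-shot":
        # Zero-shot CoT: a trigger phrase is usually enough.
        return f"{question}\nThink step by step, then give the final answer on the last line."
    # Few-shot CoT: the example shows the reasoning path, not just the answer.
    worked_example = (
        "Q: A pen costs 2€ and a notebook costs 3€ more. What do both cost?\n"
        "A: The notebook costs 2 + 3 = 5€. Together: 2 + 5 = 7€. Final answer: 7€."
    )
    return f"{worked_example}\n\nQ: {question}\nA:"

zs = cot_prompt("If 3 apples cost 6€, what do 7 apples cost?")
fs = cot_prompt("If 3 apples cost 6€, what do 7 apples cost?", mode="few-shot")
```

Asking for the final answer on its own line also makes the response easier to parse downstream.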

### Where prompts come from {.toc-heading-only}

<details class="post-accordion">
<summary style="cursor: pointer; font-weight: 600;">Where prompts come from</summary>
<div style="margin-top: 0.75em;">

<ul>
<li><strong>You define the goal.</strong> The primary source is your statement of task, audience, and quality bar.</li>
<li><strong>LLM-generated drafts.</strong> Ask the model for a structured prompt (role, steps, format), then edit manually.</li>
<li><strong>Reverse engineering.</strong> From a desired output (or a great response), reconstruct and refine what in the prompt made it work.</li>
</ul>

<p>In practice people combine model drafts, hard constraints, and iteration on real outputs.</p>

</div>
</details>

### Role-based prompting {.toc-heading-only}

<details class="post-accordion">
<summary style="cursor: pointer; font-weight: 600;">Role-based prompting</summary>
<div style="margin-top: 0.75em;">

<p><strong>Explicit role and perspective:</strong> “you are an editor”, “economist for non-experts”, “comedian in the style of …”. That steers tone, depth, and granularity.</p>

<ul>
<li>Especially helpful for <strong>open-ended</strong> work: explanations, creative writing, advice when there is no single “right” format.</li>
<li><strong>Role examples:</strong> public speaker, domain expert, comedian, teacher — role changes vocabulary, structure, and how bold the model can be.</li>
</ul>

<p>Role complements but does not replace a clear task and constraints; “you are an expert” without context helps less than expert + goal + format.</p>

</div>
</details>
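The “expert + goal + format” point can be made concrete: a role line is paired with an explicit task and constraints rather than sent alone. The roles and wording here are illustrative:

```python
ROLES = {
    # Role changes vocabulary, structure, and depth; the task stays the same.
    "editor": "You are a strict copy editor. Be terse and concrete.",
    "teacher": "You are a patient teacher explaining to a beginner. Use one simple analogy.",
    "speaker": "You are a public speaker. Optimize for rhythm and memorable phrasing.",
}

def role_prompt(role: str, task: str, constraints: str) -> str:
    """Role alone is not enough: pair it with an explicit task and constraints."""
    return f"{ROLES[role]}\n\nTask: {task}\nConstraints: {constraints}"

p = role_prompt(
    "teacher",
    "Explain what a context window is.",
    "At most 3 sentences, no jargon.",
)
```

Swapping the role key while holding the task fixed is a cheap way to compare tones on the same request.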

### Step-back prompting {.toc-heading-only}

<details class="post-accordion">
<summary style="cursor: pointer; font-weight: 600;">Step-back prompting</summary>
<div style="margin-top: 0.75em;">

<p><strong>A CoT-style variation:</strong> first step back to general principles, definitions, or a standard method, <strong>then</strong> apply them to the specific case.</p>

<ul>
<li>Start with guiding questions: which laws, patterns, or concepts matter for this task?</li>
<li>Then map onto your instance: data, constraints, desired output.</li>
</ul>

<p>Useful when failures come from jumping to an answer without anchoring on the right background knowledge.</p>

</div>
</details>
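The two stages above can be sketched as a pair of prompts; in practice stage 2 would be filled with the model's stage-1 answer, which the `{principles}` placeholder stands in for (everything here is an illustrative sketch):

```python
def step_back_prompts(question: str) -> tuple[str, str]:
    """Step-back: stage 1 asks for general principles; stage 2 applies them to the case."""
    stage1 = (
        "Before solving, step back: which general principles, definitions, "
        f"or standard methods are relevant to this question?\n\nQuestion: {question}"
    )
    stage2 = (
        "Principles:\n{principles}\n\n"
        f"Now apply them to the specific case.\nQuestion: {question}\nAnswer:"
    )
    return stage1, stage2

s1, s2 = step_back_prompts("Why does my cache hit rate drop at peak traffic?")
```

Splitting the request keeps the model from jumping straight to an answer before the relevant background is on the table.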
