New: Agile-Environment.md, Search_Requests.md

This commit is contained in:
tlg
2026-03-13 17:25:53 +01:00
parent 0056946e8a
commit 1fcec1a954
2 changed files with 197 additions and 0 deletions

154
Agile-Environment.md Normal file

@@ -0,0 +1,154 @@
# Agile environment
## About me and my setup
I am an AI consultant who can also implement solutions.
I have paid subscriptions to Miro, Gemini, ChatGPT and Manus.
I run Linux servers.
In my VSCodium + Cline environment I currently use Gemini or OpenAI via API.
## Idea for a new Agile environment
I have the idea of replacing my current manually managed Sprint planning and tracking board (implemented with Inkscape) with something AI-supported.
I do not want to subscribe to another commercial tool for budget reasons. I like high-quality FOSS tools and I am able to build my own apps with AI support.
My current idea is to use Claude Code + a Miro MCP server + Miro and let Claude Code build my setup according to my needs.
## Your task
Evaluate my current idea, propose alternatives and compare my idea to your alternatives.
When it comes to sprint planning and progress tracking, I really like the visual UX of my current process. I usually have a few dozen tasks in the backlog, not hundreds.
A quick check of Plane Community Edition and comparing against my current flow shows:
- In Plane, a Work item Priority can only be one of {urgent, high, medium, low, none}. In my current process I sort my sprint task priorities by unique numbers, so Plane would restrict me.
- Like on a physical whiteboard with sticky notes of different colors, my current process has several topic areas, and the notes of each topic area have a dedicated color. This makes it immediately visible which note belongs to which topic area, which is very helpful when many notes are in the backlog. I think in Plane this is achievable with the Work item Issue Type feature, which also allows custom properties. However, many clicks are needed to create a new Work item with a certain Issue Type. Can I copy an existing Work item so that the copy already has the Issue Type set?
- In Plane, the backlog looks like a list of Work items in which only the sequential identifier and the description are shown. In my current process each note also shows the priority and the estimated effort. I can group notes together by placing them next to each other. During sprint planning, when moving notes to the In-Progress area, I can sum up the estimated effort, which indicates whether the planned tasks are achievable within the sprint.
Are there any FOSS alternatives other than Plane which have a visual, notes-style UX and allow AI to access the elements?
Nice Comparison of Alternatives table, but please add a column for Taiga and provide the table in raw Markdown format because it is wider than our chat window.
Can I self-host AFFiNE and Taiga using containers?
| Feature | Plane | AFFiNE | Tldraw (Custom Build) | Taiga |
| :--- | :--- | :--- | :--- | :--- |
| **UX Philosophy** | Structured Database | Infinite Canvas / Hybrid | Pure Visual Canvas | Agile-Native Visual |
| **Priority Style** | Fixed (Urgent to Low) | Custom Block Properties | Free-text on Notes | Customizable Priorities |
| **Color Coding** | Label-based | Any background color | Any shape/note color | Fully customizable types |
| **Effort Summing** | Hidden in sub-menus | Manual or via API | Custom Script/AI calc | Built-in (Taiga Seed) |
| **Task Creation** | Form-based (Multi-click) | One-click (Stamp block) | Zero-click (Drag box) | Rapid "User Story" entry |
| **Visual Logic** | Strict Columns/Lanes | Free-form Spatial | Free-form Spatial | Kanban + Scrum Sprints |
| **AI Access** | Integrated "Pi" Agent | Native AI + API | Official MCP + Hooks | Mature REST API |
| **FOSS License** | AGPL v3.0 | MIT / FOSS | MIT / FOSS | MPL 2.0 / FOSS |
All three Plane alternatives look like strong candidates. Please compare the community sizes, numbers of contributors, and the maturity/age of the three.
| Metric | Taiga | AFFiNE | Tldraw |
| :--- | :--- | :--- | :--- |
| **Project Age** | ~12 years (Launched 2014) | ~4 years (Launched 2022) | ~4 years (Seed Dec 2022) |
| **Maturity** | Very High (Enterprise-ready) | High (Moving toward stable 1.0) | High (Stable SDK, expanding AI) |
| **GitHub Stars** | ~10k+ (Across major repos) | ~65.8k Stars | ~45.8k Stars |
| **GitHub Forks** | ~1.5k+ (Across major repos) | ~4.6k Forks | ~3.1k Forks |
| **Contributors** | 150+ Total | ~100+ (Highly active) | ~100+ (Strong SDK adoption) |
| **Primary Use** | Agile Project Management | All-in-one Knowledge OS | Collaborative Whiteboard SDK |
My favorite was AFFiNE, but when looking at their templates I found just two Agile-relevant templates, and both of them are far from what I want to have.
Since I'm surely not the only one thinking about visual-UX Sprint planning and tracking with a backlog, there should be examples on the internet of AFFiNE being well suited for this. Please search for such examples and take into account what I wrote earlier about my current process.
Thanks, you generated a nice recommendation for how I could build Sprint planning and tracking with AFFiNE myself. But that was not what I asked for.
Your task: Search the internet for examples of how others built their Sprint planning and tracking with AFFiNE.
< Gemini fails to understand the search request, likely because the context is already too long >
Please write a request document in Markdown format with my requests for the
visual UX Scrum taskboard and provide it in Raw Markdown format.
# Requirements Specification: Visual AI-Native Scrum Taskboard
## 1. Vision
To transition from a manual, static SVG-based (Inkscape) workflow to a dynamic, AI-supported "Operating System" that preserves the spatial/visual intuition of a physical whiteboard while leveraging the structured intelligence of LLMs and MCP servers.
## 2. Visual UX & Spatial Organization
* **Canvas-First Workflow:** The primary interface must be an infinite, edgeless canvas (whiteboard style) where tasks are represented as "sticky notes" or "cards."
* **Spatial Grouping:** Tasks must be groupable by physical proximity (e.g., placing notes within a "Sprint 1" or "In-Progress" frame) rather than strictly being confined to fixed database columns.
* **Rapid Creation & Cloning:** Must support instant duplication of notes (e.g., via `Cmd/Ctrl + D`) that preserves all metadata, styles, and "Issue Type" properties.
## 3. Task Attributes & Data Structure
* **Unique Numeric Priority:** The system must support a unique integer field for priorities (1, 2, 3, …) to allow for granular sorting beyond standard "High/Medium/Low" buckets.
* **Color-Coded Topic Areas:** Task notes must be visually distinct based on their "Topic Area" or "Issue Type" (e.g., API = Blue, Frontend = Green, DevOps = Yellow).
* **Visible Estimates:** Effort estimations (points or hours) must be visible on the "front" of the card for quick scanning during planning.
## 4. Sprint Planning & Tracking Logic
* **Effort Summing:** The system must provide a mechanism (either via view-switching to a table or via an AI query) to sum the total effort of all tasks currently placed in the "Active Sprint" or "In-Progress" area.
* **Backlog Management:** Support for a "Backlog" view that can toggle between an unstructured pile of notes on the canvas and a structured, sorted list in a table view.
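The effort-summing logic above can be sketched against a board export. This is a minimal illustration only: the JSON field names (`x`, `y`, `estimate`) and the frame bounds are assumptions for the sketch, not AFFiNE's actual data schema.

```python
# Sketch: sum the estimated effort of all notes placed inside the
# "Active Sprint" frame of a canvas board, given a hypothetical export
# where each note carries its canvas position and an effort estimate.
# Field names and frame bounds are illustrative assumptions.

SPRINT_FRAME = {"x": 0, "y": 0, "width": 800, "height": 600}  # assumed frame bounds

def inside(note, frame):
    """True if the note's top-left corner lies within the frame."""
    return (frame["x"] <= note["x"] < frame["x"] + frame["width"]
            and frame["y"] <= note["y"] < frame["y"] + frame["height"])

def sprint_capacity(notes, frame=SPRINT_FRAME):
    """Sum the effort estimates of all notes placed in the sprint frame."""
    return sum(n.get("estimate", 0) for n in notes if inside(n, frame))

notes = [
    {"id": "T1", "x": 100,  "y": 50,  "estimate": 3},  # inside sprint frame
    {"id": "T2", "x": 200,  "y": 400, "estimate": 5},  # inside sprint frame
    {"id": "T3", "x": 1500, "y": 50,  "estimate": 8},  # still in the backlog area
]
print(sprint_capacity(notes))  # 3 + 5 = 8
```

The same spatial test would drive the backlog toggle: notes outside every named frame form the unstructured pile, while a table view sorts them by the numeric priority field.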
## 5. AI Integration (Claude Code / MCP)
* **State Awareness:** Claude Code must be able to "read" the state of the board (positions of cards, text content, and metadata).
* **Automated Summarization:** Ability to trigger a command (e.g., `/sprint-report`) that scans the visual board and generates a summary of progress and remaining capacity.
* **Compounding Context:** Every task movement or update should be recordable in a "living" markdown file system (as described in the Dave Killeen "DEX" model) to track project history and developer preferences.
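The `/sprint-report` requirement could then be a thin layer over the same board state the AI agent reads. The sketch below groups notes by board area and reports counts and effort; the `area` field is an assumption about how an agent would resolve which frame a note sits in.

```python
from collections import defaultdict

# Sketch of a /sprint-report style summary: group notes by the board area
# an agent resolved them to (via MCP or an export) and report task counts
# and effort sums per area. The "area" field is an illustrative assumption.

def sprint_report(notes):
    """Return per-area task counts and effort sums."""
    report = defaultdict(lambda: {"tasks": 0, "effort": 0})
    for n in notes:
        area = n.get("area", "Backlog")
        report[area]["tasks"] += 1
        report[area]["effort"] += n.get("estimate", 0)
    return dict(report)

notes = [
    {"id": "T1", "area": "In-Progress", "estimate": 3},
    {"id": "T2", "area": "Done",        "estimate": 5},
    {"id": "T3", "area": "Backlog",     "estimate": 8},
]
for area, stats in sprint_report(notes).items():
    print(f"{area}: {stats['tasks']} task(s), {stats['effort']} points")
```

Appending each report to a dated Markdown file would give the "living" history described above.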
## 6. Technical Constraints
* **FOSS Focus:** Preference for high-quality Free and Open Source Software (e.g., AFFiNE, Taiga, or Tldraw SDK).
* **Self-Hosted:** Must be deployable via Docker/Containers on a private Linux server.
* **No Commercial Subscriptions:** Avoid additional monthly "Per User" fees; utilize existing API keys for Gemini/OpenAI.
---
# Examples of AFFiNE Scrum taskboards
## Motivation: Visual AI-Native Scrum Taskboard
A transition is planned from a manual, static SVG-based Scrum taskboard to a dynamic, AI-supported process that preserves the spatial/visual intuition of a physical whiteboard while leveraging the structured intelligence of LLMs and MCP servers.
## Requirements
### 1. Visual UX & Spatial Organization
* **Canvas-First Workflow:** The primary interface must be a canvas (whiteboard style) where tasks are represented as "sticky notes" or "cards".
* **Spatial Grouping:** Tasks must be groupable by physical proximity (e.g., placing notes within a "Sprint 1" or "In-Progress" frame) rather than strictly being confined to fixed database columns or table rows.
* **Rapid Creation & Cloning:** Must support instant duplication of notes that preserves all metadata, styles, and "Issue Type" properties.
### 2. Task Attributes & Data Structure
* **Unique Numeric Priority:** The system must support a unique integer field for priorities (1, 2, 3, …) to allow for granular sorting beyond standard "High/Medium/Low" buckets.
* **Color-Coded Topic Areas:** Task notes must be visually distinct based on their "Topic Area" or "Issue Type" (e.g., API = Blue, Frontend = Green, DevOps = Yellow).
* **Visible Estimates:** Effort estimations (points or hours) must be visible on the "front" of the card for quick scanning during planning.
### 3. Sprint Planning & Tracking Logic
* **Effort Summing:** The system must provide a mechanism (either via view-switching to a table or via an AI query) to sum the total effort of all tasks currently placed in the "Active Sprint" or "In-Progress" area.
* **Backlog Management:** Support for a "Backlog" view that can toggle between an unstructured pile of notes on the canvas and a structured, sorted list in a table view.
### 4. AI Integration (MCP or API)
* **State Awareness:** An AI agent must be able to "read" the state of the board (positions of cards, text content, and metadata).
* **Automated Summarization:** Ability to trigger a command that scans the visual board and generates a summary of progress and remaining capacity.
* **Compounding Context:** Task movements or updates should be recordable in a "living" file system or database to track the project history.
### 5. Technical Constraints
* **FOSS Focus:** Strong preference for high-quality Free and Open Source Software, current candidate is AFFiNE.
* **Self-Hosted:** Must be deployable via Docker/Containers on a private Linux server.
* **AI via API Keys or Claude Code:** AI must utilize existing API keys for the LLM, or Claude Code.
## Your task
Please search the Internet for existing examples of AFFiNE Scrum taskboard implementations which match or are close to the requirements.
I checked out the two templates "Agile Project Planning: Sprints & Timeline Board" and "Agile Sync & Sprint Meeting Agenda Template". Both are not examples of AFFiNE Scrum taskboards! So they are not close to my requirements!
Please find examples of Scrum taskboards which have a "Backlog" area, an area for in-progress tasks and an area for done tasks. AFFiNE shapes representing Scrum tasks need to exist, and progress must be trackable by moving the shapes from one area to another.
Don't search the templates on https://affine.pro/templates because they do not include such examples.
Better search in AFFiNE communities.

43
Search_Requests.md Normal file

@@ -0,0 +1,43 @@
# Hardware details search for AI model benchmark values
## Motivation
Benchmark values for speed are provided in tokens per second for
a certain AI model, but the hardware used in the benchmark is not
described; therefore, research is requested to find combinations
of AI model speed values and hardware details.
## AI model
The AI model is Qwen3 VL 30B A3B Instruct.
The model details are described at
https://huggingface.co/Qwen/Qwen3-VL-30B-A3B-Instruct
## Benchmark speed values
An example for provided benchmark speed values is at
https://artificialanalysis.ai/models/qwen3-vl-30b-a3b-instruct/providers
Three relevant speed values are provided there:
- Fireworks: 141.7 t/s
- Novita: 105.3 t/s
- Alibaba Cloud: 104.3 t/s
The fourth speed value is not relevant because it is for a quantized
version of the model (FP8, 8 bits per parameter).
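A rough plausibility check is possible even before the hardware is identified: single-request decode of an MoE model is typically memory-bandwidth-bound, and each token must stream the ~3B *active* parameters (the "A3B" in the model name) from VRAM, about 6 GB at 16 bits per parameter. The sketch below computes this theoretical ceiling; the bandwidth figures are approximate public specs to be verified, and real throughput is lower due to KV cache reads, batching, and kernel overhead.

```python
# Rough upper bound on decode speed for a bandwidth-bound MoE model:
# every generated token streams the ~3B active parameters from VRAM once.
ACTIVE_PARAMS = 3e9      # active parameters per token ("A3B")
BYTES_PER_PARAM = 2      # 16 bits per parameter (non-quantized model)

def max_tokens_per_second(vram_bandwidth_gbs):
    """Theoretical ceiling: VRAM bandwidth / bytes streamed per token."""
    bytes_per_token = ACTIVE_PARAMS * BYTES_PER_PARAM
    return vram_bandwidth_gbs * 1e9 / bytes_per_token

# Approximate public bandwidth specs in GB/s (assumptions to verify):
for gpu, bw in [("A100 80GB (HBM2e)", 2039), ("H100 SXM (HBM3)", 3350)]:
    print(f"{gpu}: ceiling ~{max_tokens_per_second(bw):.0f} t/s")
```

Since all three measured values (104–142 t/s) sit well below even the A100 ceiling, bandwidth alone does not pin down the hardware, which is why the provider details are requested below.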
## Tasks
Your tasks are:
1. Find out which hardware was used for the three speed values of the example
and which VRAM throughput in GB/s this hardware had.
2. If you were not able to solve 1., then search for at least two
combinations of speed values and hardware details for the AI model
with the original model parameter size of 16 bits per parameter.
If successful, end here.
3. If you were able to solve neither 1. nor 2., then search for
at least two combinations of speed values and hardware details
for the AI model with a quantized model parameter size of
8 bits per parameter.