Parse the top-level `attachments` array in WeKan board JSON exports,
group them by card ID, base64-decode the payload, and attach the
resulting files to the generated tasks so they land in Vikunja as
task attachments. Orphaned attachments (cardId with no matching card)
are silently skipped; decode errors are logged and skipped.
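A minimal sketch of that grouping and decoding step; the `wekanAttachment` struct and field names are assumptions based on the description, not Vikunja's actual types:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"log"
)

type wekanAttachment struct {
	CardID string
	Name   string
	Base64 string
}

// decodeAttachments groups attachments by card ID, silently skipping
// orphans (card IDs not present in knownCards) and logging decode errors.
func decodeAttachments(atts []wekanAttachment, knownCards map[string]bool) map[string][][]byte {
	byCard := make(map[string][][]byte)
	for _, a := range atts {
		if !knownCards[a.CardID] {
			continue // orphaned attachment: silently skipped
		}
		raw, err := base64.StdEncoding.DecodeString(a.Base64)
		if err != nil {
			log.Printf("skipping attachment %q: %v", a.Name, err)
			continue
		}
		byCard[a.CardID] = append(byCard[a.CardID], raw)
	}
	return byCard
}

func main() {
	cards := map[string]bool{"c1": true}
	atts := []wekanAttachment{
		{CardID: "c1", Name: "a.txt", Base64: base64.StdEncoding.EncodeToString([]byte("hello"))},
		{CardID: "ghost", Name: "b.txt", Base64: "aGk="},
	}
	got := decodeAttachments(atts, cards)
	fmt.Println(len(got), len(got["c1"]))
}
```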
The test previously fetched the attachment from https://vikunja.io/testimage.jpg,
which caused flaky failures in CI when the external host was unreachable
(context deadline exceeded). Serve the local testimage.jpg via httptest and
temporarily allow non-routable IPs for the SSRF-safe client so the test is
hermetic and deterministic.
Builds an in-memory export zip with a 2 MB payload and a data.json
that claims size: 0, then asserts neither the honest 2 MB row nor
the forged 0-size row ends up in the files table. Covers
GHSA-qh78-rvg3-cv54.
The hard-coded 500 MB per-entry cap meant operators who set a tighter
files.maxsize could not actually enforce it on imports. Derive the cap
from files.maxsize with a floor so data.json / filters.json / VERSION
entries can still be read when the configured limit is tiny.
Clamp the uint64->int64 conversion and the LimitReader cap so absurd
configuration values do not overflow into MinInt64 and cause
io.LimitReader to treat every entry as EOF.
Import metadata is attacker-controlled and can forge a small size to
bypass the attachment size limit (GHSA-qh78-rvg3-cv54). Compute the
size from the decoded content instead of trusting a.File.Size.
Task/project duplication and the Todoist migration were passing stored
or API-reported sizes into NewAttachment. Derive the size from the
actual buffered content so every caller matches the hardened boundary
behaviour (GHSA-qh78-rvg3-cv54 defence-in-depth).
Task titles, project titles, team names, doer/assignee names, and API
token titles were interpolated raw into Line(...) calls whose content is
rendered to HTML by goldmark and then sanitized with bluemonday UGCPolicy.
UGCPolicy intentionally allows safe <a href> and <img src> with
http/https URLs, so a title containing Markdown link or image syntax
would survive sanitization as a working phishing link or tracking pixel
in a legitimate Vikunja email.
Introduce notifications.EscapeMarkdown, which prefixes every CommonMark
§2.4 backslash-escapable ASCII punctuation character with a backslash,
including '<', so autolinks like `<https://evil.com>` are neutralized
before reaching goldmark. Apply it to every user-controlled argument
of every Line(...) call in pkg/models that feeds into an i18n template,
and to the hand-built "* [title](url) (project)" Markdown link in the
overdue-tasks digest notification.
Also escape the migration error string in MigrationFailedNotification,
an additional sink not listed in the advisory (error messages can carry
user-controlled content from the external migration source).
Subject(...), Greeting(...), and CreateConversationalHeader(...) are
left unchanged: Subject is passed directly to the mail library and is
not markdown-rendered, Greeting is rendered via html/template's built-in
HTML escaping without markdown, and the conversational header is
sanitized as raw HTML by bluemonday in mail_render.go.
Fixes GHSA-45q4-x4r9-8fqj.
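A minimal sketch of an escaper along these lines; the function name matches the description, but the body is an assumption, not the actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// escapable is the CommonMark §2.4 backslash-escapable set:
// !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
const escapable = "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~"

// escapeMarkdown backslash-prefixes every escapable ASCII punctuation
// character so link, image, and autolink syntax cannot survive rendering.
func escapeMarkdown(s string) string {
	var b strings.Builder
	for _, r := range s {
		if r < 128 && strings.ContainsRune(escapable, r) {
			b.WriteByte('\\')
		}
		b.WriteRune(r)
	}
	return b.String()
}

func main() {
	fmt.Println(escapeMarkdown("[click](https://evil.com)"))
	fmt.Println(escapeMarkdown("<https://evil.com>"))
}
```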
Allow users to skip the first N data rows when importing CSV files.
This is useful when the CSV contains metadata rows before the actual
task data begins. Adds skip_rows to ImportConfig (backend) and a
number input in the parsing options UI (frontend).
Add a new CSV migration module that allows users to import tasks from
any CSV file with custom column mapping and parsing options.
Backend changes:
- New CSV migrator module with detection, preview, and import endpoints
- Auto-detection of delimiter, quote character, and date format
- Suggested column mappings based on column name patterns
- Transactional import using InsertFromStructure
Frontend changes:
- New CSV migration UI with two-step flow (upload -> mapping -> import)
- Column mapping selectors for all task attributes
- Live preview showing first 5 tasks with current mapping
- Parsing option controls for delimiter and date format
The CSV migrator creates a parent "Imported from CSV" project with
child projects based on the project column if provided, or a default
"Tasks" project for tasks without a specified project.
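The delimiter auto-detection can be sketched as a simple frequency count over the first line; the real heuristic may be more involved (quote awareness, multi-line sampling):

```go
package main

import (
	"fmt"
	"strings"
)

// detectDelimiter picks the most frequent candidate separator on the
// first line, defaulting to a comma.
func detectDelimiter(sample string) rune {
	firstLine, _, _ := strings.Cut(sample, "\n")
	best, bestCount := ',', 0
	for _, cand := range []rune{',', ';', '\t', '|'} {
		if n := strings.Count(firstLine, string(cand)); n > bestCount {
			best, bestCount = cand, n
		}
	}
	return best
}

func main() {
	fmt.Printf("%q\n", detectDelimiter("title;due;project\nA;2024-01-01;Inbox\n"))
}
```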
Previously only the "To-Do" default bucket was deleted, leaving "Doing"
and "Done" as duplicates alongside migration-provided buckets. Now all
default-created buckets are removed when migration data already provides
bucket assignments for all tasks.
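The cleanup rule can be sketched as a simple coverage check; the names are illustrative, not the migration pipeline's actual types:

```go
package main

import "fmt"

// bucketsToDelete returns all default-created buckets only when every
// imported task already carries a bucket assignment; otherwise the
// defaults are kept as a fallback.
func bucketsToDelete(defaultBuckets []string, tasksWithBucket, totalTasks int) []string {
	if totalTasks > 0 && tasksWithBucket == totalTasks {
		return defaultBuckets
	}
	return nil
}

func main() {
	fmt.Println(bucketsToDelete([]string{"To-Do", "Doing", "Done"}, 5, 5))
	fmt.Println(bucketsToDelete([]string{"To-Do", "Doing", "Done"}, 3, 5))
}
```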
Add comprehensive tests for the WeKan conversion function including
edge cases (empty board, orphan cards, color mapping, multiple
checklists, unsupported fields) and a realistic JSON fixture file.
Add a file-based migration importer that reads WeKan board JSON exports
and creates Vikunja projects with kanban buckets, tasks, labels,
checklists, and comments.
WeKan lists become kanban buckets. Checklists are converted to HTML
task lists in the description. Card descriptions and comments are
converted from markdown to HTML using goldmark. Label colors are
mapped from WeKan's CSS color names to their actual hex values.
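An illustrative slice of the color mapping, using standard CSS hex values; the migrator's full table and its fallback behaviour may differ:

```go
package main

import "fmt"

// wekanColors maps a few of WeKan's CSS color names to their standard hex
// values; the real table covers the full WeKan palette.
var wekanColors = map[string]string{
	"green":  "#008000",
	"yellow": "#ffff00",
	"orange": "#ffa500",
	"red":    "#ff0000",
}

func labelHexColor(name string) string {
	if hex, ok := wekanColors[name]; ok {
		return hex
	}
	return "#000000" // assumed fallback for unknown names
}

func main() {
	fmt.Println(labelHexColor("green"), labelHexColor("unknown"))
}
```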
TickTick CSV exports don't guarantee parent tasks appear before their
subtasks. When a child row came first, the shared migration pipeline
tried to create a title-less placeholder for the missing parent, which
failed with 'Task title cannot be empty'.
Resolves go-vikunja/vikunja#2487
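A sketch of a two-pass approach that tolerates child rows appearing before their parents: register all rows first, then link. Names are illustrative, not the shared pipeline's actual types:

```go
package main

import "fmt"

type row struct {
	ID, ParentID, Title string
}

// linkParents links children to parents only when the parent actually
// exists in the export; it never synthesizes a title-less placeholder
// for a missing parent.
func linkParents(rows []row) map[string]string {
	known := make(map[string]bool, len(rows))
	for _, r := range rows {
		known[r.ID] = true
	}
	links := make(map[string]string)
	for _, r := range rows {
		if r.ParentID != "" && known[r.ParentID] {
			links[r.ID] = r.ParentID
		}
	}
	return links
}

func main() {
	rows := []row{
		{ID: "b", ParentID: "a", Title: "Child appears first"},
		{ID: "a", Title: "Parent appears later"},
	}
	fmt.Println(linkParents(rows))
}
```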
Add config-level exclusions for G117 (secret-named struct fields),
G101 in test files, G702/G704 in magefile, and goheader in plugins.
Add inline #nosec comments for specific G703/G704 false positives
in export, dump/restore, migration, and avatar code.
Replace io.LimitReader with a new readZipEntry helper that reads one extra
byte to detect when content exceeds maxZipEntrySize (500 MB). This prevents
silent data corruption where partial file bytes would be stored as if the
upload succeeded.
The import now fails with ErrFileTooLarge instead of accepting truncated
content for attachments and background blobs.
TickTick uses status "2" (Archived) for completed tasks that were
subsequently archived. The import only checked for status "1"
(Completed), causing archived tasks to be imported as open despite
having a completion timestamp.
Closes go-vikunja/vikunja#2278
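A minimal sketch of the corrected status mapping, treating both "1" (Completed) and "2" (Archived) as done; the function name is illustrative:

```go
package main

import "fmt"

// isDone maps TickTick status codes to completion: "1" is Completed,
// "2" is Archived (completed, then archived).
func isDone(status string) bool {
	return status == "1" || status == "2"
}

func main() {
	fmt.Println(isDone("0"), isDone("1"), isDone("2"))
}
```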
Update all code paths that pass file content to the storage layer to
provide io.ReadSeeker instead of io.Reader:
- Avatar upload: use bytes.NewReader instead of bytes.Buffer
- Background upload handler: use bytes.NewReader instead of bytes.Buffer
- Unsplash background: buffer response body into bytes.NewReader
- Dump restore: buffer zip entry into bytes.NewReader
- Migration structure: pass bytes.NewReader directly instead of wrapping
in io.NopCloser
- Task attachment: change NewAttachment parameter from io.ReadCloser to
io.ReadSeeker
Fixes a bug where the webhook HTTP client was mutating `http.DefaultClient` (the global singleton), causing all outbound HTTP requests in the application to use the webhook proxy. This broke OIDC authentication and other external HTTP calls when a webhook proxy was configured.
Fixes #2144
This moves error handling into a centralized HTTP error handler in `pkg/routes/error_handler.go` that converts all error types to proper HTTP responses. This simplifies the overall error handling because HTTP handlers now only need to return the error instead of calling HandleHTTPError as before.
It also removes the duplication between handling errors with and without Sentry.
This change fixes a few issues with the TickTick import:
1. BOM (Byte Order Mark) Handling: Added stripBOM() function to properly handle UTF-8 BOM at the beginning of CSV files
2. Multi-line Status Section: Updated header detection to handle the multi-line status description in real TickTick exports
3. CSV Parser Configuration: Made the CSV parser more lenient with variable field counts and quote handling
4. Test Infrastructure: Added missing logger initialization for tests
5. Field Mapping: Fixed the core issue where CSV fields weren't being mapped to struct fields correctly
The main problem was in the newLineSkipDecoder function where:
- Header detection calculated line skip count on BOM-stripped content
- CSV decoder was also stripping BOM and applying the same skip count
- This caused inconsistent positioning and empty field mapping
Rewrote the decoder to use a scanner-based approach with consistent BOM handling.
Resolves https://github.com/go-vikunja/vikunja/issues/1870
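A sketch of the two pieces the rewritten decoder keeps in sync: strip the BOM once, then apply the skip count against that same BOM-free content. `stripBOM` mirrors the helper named above; the scanner-based skip is illustrative.

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"strings"
)

var utf8BOM = []byte{0xEF, 0xBB, 0xBF}

func stripBOM(b []byte) []byte {
	return bytes.TrimPrefix(b, utf8BOM)
}

// skipLines strips the BOM first and then drops n lines, so the skip
// count is always computed against the same content it is applied to.
func skipLines(b []byte, n int) string {
	sc := bufio.NewScanner(bytes.NewReader(stripBOM(b)))
	var kept []string
	for i := 0; sc.Scan(); i++ {
		if i < n {
			continue
		}
		kept = append(kept, sc.Text())
	}
	return strings.Join(kept, "\n")
}

func main() {
	raw := append([]byte{0xEF, 0xBB, 0xBF}, []byte("Status meanings:\n0 open, 2 archived\nTitle,Status\nTask A,2\n")...)
	fmt.Println(skipLines(raw, 2))
}
```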