Add sync redesign with offline fallback (M9)

- Migration 003: adds logged_at to sync_log for TTL pruning; migrates
  settings_history to UUID TEXT PK with updated_at column
- SyncStore: Prune() deletes rows older than 30d and writes a '_pruned'
  marker at the boundary version; Pull() calls Prune lazily and returns
  ErrSyncStale (410) when the client's since_version is behind the marker
- sync_handler.go: GET /api/sync/pull?since=N; POST /api/sync/push with
  last-updated_at-wins conflict resolution for entries, balance_adjustments,
  settings_history; closed_days/closed_weeks skipped (server-only mutations)
- router.go: passes entryStore, adjustmentStore, settingsStore to SyncHandler
- settings_store.go: UUID PK, updated_at column, Upsert() for push path
- settings_service.go: generates UUID on create, sets updated_at on update
- settings_handler.go: ID params changed from int64 to string
- domain.go: Settings.ID string, Settings.UpdatedAt added
- client.ts: all mutation methods catch TypeError (offline) and fall back
  to Dexie write + outbox enqueue; crypto.randomUUID() for offline creates;
  Settings.id type changed to string
- db.ts: Dexie v3 — settings_history key path changed to string UUID;
  upgrade handler clears table for repopulation via pull
- sync.ts: real pushOutbox to POST /api/sync/push; pullChanges uses GET
  with ?since=N; 410 triggers coldStart() + retry; coldStart() wipes all
  tables and resets last_version
- 4 new Go store tests covering normal pull, stale client, empty prune, and
  client-ahead-of-marker; all tests pass (Go store + service suites, 19 Vitest tests)
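
Condensed, one client sync cycle looks like this (a simplified sketch of the flow implemented in sync.ts below; auth headers and the per-table apply logic are omitted, and the function name `syncOnce` and the `$lib/stores/sync` import path for `coldStart` are illustrative assumptions):

```ts
import { db, getLastVersion, setLastVersion } from '$lib/stores/db';
import { coldStart } from '$lib/stores/sync'; // assumed path; coldStart() is defined in sync.ts

async function syncOnce(): Promise<void> {
  // 1. Push queued offline mutations as structured push items.
  const items = await db.outbox.toArray();
  if (items.length > 0) {
    const pushRes = await fetch('/api/sync/push', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        changes: items.map((i) => ({
          entity: i.entity,
          entity_id: i.entity_id,
          op: i.op,
          payload: JSON.parse(i.payload)
        }))
      })
    });
    if (pushRes.ok) {
      // Server answers { applied, skipped }; drop applied items from the outbox.
      const { applied } = (await pushRes.json()) as { applied: string[] };
      const ok = new Set(applied);
      await db.outbox.bulkDelete(items.filter((i) => ok.has(i.entity_id)).map((i) => i.id!));
    }
  }

  // 2. Pull everything newer than the last version this client has seen.
  const since = await getLastVersion();
  const pullRes = await fetch(`/api/sync/pull?since=${since}`);
  if (pullRes.status === 410) {
    // Server pruned history this client never saw: wipe local tables, reset to 0, re-pull.
    await coldStart();
    return syncOnce();
  }
  if (!pullRes.ok) return;
  const { changes, server_version } = await pullRes.json();
  for (const change of changes) {
    // apply each upsert/delete to the matching Dexie table (see applyUpsert/applyDelete in sync.ts)
  }
  await setLastVersion(server_version);
}
```
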
2026-04-30 22:50:33 +02:00
parent 3214f48a6f
commit d8366f5c25
15 changed files with 864 additions and 144 deletions

PLAN.md
View File

@@ -77,22 +77,25 @@ CREATE TABLE closed_weeks (
);
-- Settings with effective-from semantics so past weeks aren't retroactively changed
-- Migration 003: switched to TEXT UUID primary key; added updated_at
CREATE TABLE settings_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
effective_from TEXT NOT NULL, -- 'YYYY-MM-DD'
id TEXT PRIMARY KEY, -- UUID (client-generated, sync-friendly)
effective_from TEXT NOT NULL, -- 'YYYY-MM-DD'
hours_per_week REAL NOT NULL,
workdays_mask INTEGER NOT NULL DEFAULT 31, -- bits Mon=1..Sun=64; Mon-Fri = 31
workdays_mask INTEGER NOT NULL DEFAULT 31, -- bits Mon=1..Sun=64; Mon-Fri = 31
timezone TEXT NOT NULL DEFAULT 'UTC',
created_at INTEGER NOT NULL
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
);
-- Sync event log (last-write-wins per entity row)
-- Sync event log (append-only; TTL-pruned; prune marker for stale-client detection)
CREATE TABLE sync_log (
entity TEXT NOT NULL, -- 'entries' | 'closed_days' | 'closed_weeks' | 'balance_adjustments'
entity TEXT NOT NULL, -- 'entries' | 'closed_days' | ... | '_pruned'
entity_id TEXT NOT NULL,
op TEXT NOT NULL, -- 'upsert' | 'delete'
op TEXT NOT NULL, -- 'upsert' | 'delete' | 'marker'
version INTEGER NOT NULL, -- monotonic server-assigned
payload TEXT NOT NULL, -- JSON snapshot
logged_at INTEGER NOT NULL, -- unix ms; used for TTL pruning
PRIMARY KEY (entity, entity_id, version)
);
@@ -144,8 +147,8 @@ PUT /api/settings { effective_from, hours_per_week, workday
GET /api/settings/history
# Sync
POST /api/sync/pull { since_version } -> { changes[], server_version }
POST /api/sync/push { changes[] } -> { applied[], conflicts[] }
GET /api/sync/pull?since=N -> { changes[], server_version } | 410 Gone
POST /api/sync/push { changes[] } -> { applied[], skipped[] }
# Health
GET /healthz (unauthenticated)
@@ -266,7 +269,23 @@ Staged implementation:
Manual corrective entries on the History page that adjust the overall overtime balance without touching week math. Separate `balance_adjustments` table with signed `delta_ms`, optional `note`, and `effective_at` (backdatable). Balance summary combines `Σ closed_weeks.delta_ms + Σ balance_adjustments.delta_ms`.
### M9 — Future
### M9 — Sync redesign ✅
Full offline support with an online-first, offline-fallback mutation strategy.
**Backend:**
- Migration 003: `logged_at` on `sync_log`; `settings_history` migrated to UUID TEXT PK with `updated_at`.
- `SyncStore.Prune(ctx, ttl)`: deletes rows older than the TTL and writes a `_pruned` marker at the boundary version. Clients pulling with `since < marker_version` receive `ErrSyncStale`, surfaced as `410 Gone`.
- `GET /api/sync/pull?since=N`: calls Prune lazily (30-day TTL) and returns changes, or `410 Gone` if the client is stale.
- `POST /api/sync/push`: accepts batched outbox items; applies last-`updated_at`-wins for `entries`, `balance_adjustments`, `settings_history`. `closed_days`/`closed_weeks` are server-only and skipped. Returns `{applied, skipped}`.
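For reference, the push wire format reduces to the following shapes (an illustrative TypeScript sketch of the JSON bodies; the Go handler below is authoritative, and these type names are not part of the codebase):

```ts
// Illustrative shapes for POST /api/sync/push; type names are not in the codebase.
type PushOp = 'upsert' | 'delete';

interface PushItem {
  entity: string;    // 'entries' | 'balance_adjustments' | 'settings_history' | ...
  entity_id: string;
  op: PushOp;
  payload: unknown;  // JSON snapshot; must carry updated_at for last-write-wins
}

interface PushRequest {
  changes: PushItem[];
}

interface PushResponse {
  applied: string[]; // entity_ids accepted (client copy newer, or no server row)
  skipped: string[]; // entity_ids rejected (server row newer, or server-only entity)
}
```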
**Frontend:**
- `client.ts`: all mutation methods (`entries`, `balance`, `settings`) catch `TypeError` (network error) and fall back to writing directly to Dexie + enqueuing in the outbox. IDs for offline creates are generated client-side via `crypto.randomUUID()`.
- `sync.ts`: `pushOutbox` sends outbox to `POST /api/sync/push`; on success removes applied items. `pullChanges` uses `GET /api/sync/pull?since=N`; on 410 calls `coldStart()` and retries. `coldStart()` clears all Dexie tables and resets `last_version=0`.
- `db.ts`: Dexie v3 — `settings_history` key path changed to `id` (string); upgrade handler clears the table for repopulation via pull.
- Settings page: `editingId` and ID params updated from `number` to `string`.
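Every mutation method in `client.ts` follows the same fallback shape; a condensed sketch (the function name is illustrative, while `request`, `isNetworkError`, `enqueue`, `db`, and `Entry` come from `client.ts`/`db.ts`):

```ts
// Condensed shape of the online-first, offline-fallback pattern used by all mutations.
async function updateEntryWithFallback(
  id: string,
  body: { start_time?: number; end_time?: number; note?: string }
): Promise<Entry> {
  try {
    // Online path: the server applies the change and appends it to sync_log itself.
    return await request<Entry>('PUT', `/entries/${id}`, body);
  } catch (e) {
    if (!isNetworkError(e)) throw e; // non-network errors still surface to the caller
    // Offline path: update the local Dexie copy and queue the change for the next push.
    const existing = await db.entries.get(id);
    if (!existing) throw e;
    const updated: Entry = { ...existing, ...body, updated_at: Date.now() };
    await db.entries.put(updated);
    await enqueue('entries', id, 'upsert', updated);
    return updated;
  }
}
```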
### M10 — Future
CSV/JSON export, monthly summary view.
## 7. Decisions & Rationale
@@ -286,4 +305,7 @@ CSV/JSON export, monthly summary view.
| Balance adjustments | Separate `balance_adjustments` table | Avoids conflating measured deltas with manual corrections; preserves week math invariant (`delta_ms = worked - expected`) |
| Balance adjustment scope | Closed weeks only for auto balance | In-progress week delta shown separately in week view; mixing would make balance jitter |
| Balance adjustment IDs | TEXT (UUIDv7, client-generated) | Consistent with `entries`; allows offline creation and sync |
| Test framework | Vitest (frontend) + Go testing (backend) | Automated coverage for capability logic and key utilities |
| Settings history PK | TEXT UUID (migration 003) | Consistent with other entities; enables offline create; `updated_at` enables last-write-wins sync |
| Sync prune strategy | Prune marker row at boundary version | No extra table; client detects stale state from the log itself; 410 triggers full re-sync |
| Sync conflict resolution | Last `updated_at` wins | Server is authoritative; simple to implement and reason about for single-user |
| Offline mutation flow | Online-first, offline-fallback | Server is primary; client writes to Dexie+outbox only on network failure; simpler than full local-first |

View File

@@ -62,7 +62,7 @@ func main() {
staticFS = webFS
}
router := handler.NewRouter(cfg.AuthToken, entrySvc, daySvc, settingsSvc, weekSvc, syncStore, staticFS)
router := handler.NewRouter(cfg.AuthToken, entrySvc, daySvc, settingsSvc, weekSvc, syncStore, entryStore, adjustmentStore, settingsStore, staticFS)
srv := &http.Server{
Addr: ":" + cfg.Port,

View File

@@ -66,12 +66,13 @@ type ClosedWeek struct {
// Settings holds the effective configuration for a period.
type Settings struct {
ID int64 `json:"id"`
ID string `json:"id"`
EffectiveFrom string `json:"effective_from"` // YYYY-MM-DD
HoursPerWeek float64 `json:"hours_per_week"`
WorkdaysMask int `json:"workdays_mask"` // bits Mon=1..Sun=64
Timezone string `json:"timezone"`
CreatedAt int64 `json:"created_at"`
UpdatedAt int64 `json:"updated_at"`
}
// DailyExpectedMs returns the expected milliseconds for a single workday.

View File

@@ -18,6 +18,9 @@ func NewRouter(
settingsSvc *service.SettingsService,
weekSvc *service.WeekService,
syncStore *store.SyncStore,
entryStore *store.EntryStore,
adjustmentStore *store.BalanceAdjustmentStore,
settingsStore *store.SettingsStore,
staticFiles fs.FS,
) http.Handler {
r := chi.NewRouter()
@@ -47,7 +50,7 @@ func NewRouter(
weekH := NewWeekHandler(weekSvc)
weekH.Routes(r)
syncH := NewSyncHandler(syncStore)
syncH := NewSyncHandler(syncStore, entryStore, adjustmentStore, settingsStore)
syncH.Routes(r)
exportH := NewExportHandler(entrySvc, daySvc, weekSvc)

View File

@@ -4,7 +4,6 @@ import (
"database/sql"
"errors"
"net/http"
"strconv"
"github.com/go-chi/chi/v5"
"github.com/wotra/wotra/internal/service"
@@ -83,8 +82,8 @@ func (h *SettingsHandler) History(w http.ResponseWriter, r *http.Request) {
// UpdateHistoryRow PUT /api/settings/history/{id}
func (h *SettingsHandler) UpdateHistoryRow(w http.ResponseWriter, r *http.Request) {
id, err := strconv.ParseInt(chi.URLParam(r, "id"), 10, 64)
if err != nil {
id := chi.URLParam(r, "id")
if id == "" {
writeError(w, http.StatusBadRequest, "invalid id")
return
}
@@ -120,8 +119,8 @@ func (h *SettingsHandler) UpdateHistoryRow(w http.ResponseWriter, r *http.Reques
// DeleteHistoryRow DELETE /api/settings/history/{id}
func (h *SettingsHandler) DeleteHistoryRow(w http.ResponseWriter, r *http.Request) {
id, err := strconv.ParseInt(chi.URLParam(r, "id"), 10, 64)
if err != nil {
id := chi.URLParam(r, "id")
if id == "" {
writeError(w, http.StatusBadRequest, "invalid id")
return
}
@@ -138,4 +137,3 @@ func (h *SettingsHandler) DeleteHistoryRow(w http.ResponseWriter, r *http.Reques
}
w.WriteHeader(http.StatusNoContent)
}

View File

@@ -1,86 +1,290 @@
package handler
import (
"context"
"database/sql"
"encoding/json"
"errors"
"net/http"
"strconv"
"time"
"github.com/go-chi/chi/v5"
"github.com/wotra/wotra/internal/domain"
"github.com/wotra/wotra/internal/store"
)
// SyncHandler serves /api/sync routes.
type SyncHandler struct {
syncStore *store.SyncStore
sync *store.SyncStore
entries *store.EntryStore
adjustments *store.BalanceAdjustmentStore
settings *store.SettingsStore
}
func NewSyncHandler(syncStore *store.SyncStore) *SyncHandler {
return &SyncHandler{syncStore: syncStore}
func NewSyncHandler(
sync *store.SyncStore,
entries *store.EntryStore,
adjustments *store.BalanceAdjustmentStore,
settings *store.SettingsStore,
) *SyncHandler {
return &SyncHandler{
sync: sync,
entries: entries,
adjustments: adjustments,
settings: settings,
}
}
func (h *SyncHandler) Routes(r chi.Router) {
r.Post("/sync/pull", h.Pull)
r.Get("/sync/pull", h.Pull)
r.Post("/sync/push", h.Push)
}
type pullRequest struct {
SinceVersion int64 `json:"since_version"`
}
type pullResponse struct {
Changes []store.SyncChange `json:"changes"`
ServerVersion int64 `json:"server_version"`
}
// Pull POST /api/sync/pull
// Pull GET /api/sync/pull?since=N
func (h *SyncHandler) Pull(w http.ResponseWriter, r *http.Request) {
var req pullRequest
if err := decodeJSON(r, &req); err != nil {
writeError(w, http.StatusBadRequest, "invalid JSON")
return
sinceStr := r.URL.Query().Get("since")
var since int64
if sinceStr != "" {
var err error
since, err = strconv.ParseInt(sinceStr, 10, 64)
if err != nil {
writeError(w, http.StatusBadRequest, "invalid since parameter")
return
}
}
changes, serverVersion, err := h.syncStore.Pull(r.Context(), req.SinceVersion)
changes, serverVersion, err := h.sync.Pull(r.Context(), since)
if err != nil {
if errors.Is(err, store.ErrSyncStale) {
writeError(w, http.StatusGone, "sync_stale")
return
}
writeError(w, http.StatusInternalServerError, err.Error())
return
}
// Return empty array rather than null.
if changes == nil {
changes = []store.SyncChange{}
}
writeJSON(w, http.StatusOK, pullResponse{Changes: changes, ServerVersion: serverVersion})
writeJSON(w, http.StatusOK, map[string]any{
"changes": changes,
"server_version": serverVersion,
})
}
type pushChange struct {
Entity string `json:"_entity"`
Op string `json:"_op"`
EntityID string `json:"id"` // most entities use "id" or entity-specific key
Raw json.RawMessage `json:"-"`
// pushItem is a single change submitted by the client.
type pushItem struct {
Entity string `json:"entity"`
EntityID string `json:"entity_id"`
Op string `json:"op"` // "upsert" | "delete"
Payload json.RawMessage `json:"payload"`
}
type pushRequest struct {
Changes []json.RawMessage `json:"changes"`
}
type pushResponse struct {
Applied []string `json:"applied"`
Conflicts []string `json:"conflicts"`
}
// Push POST /api/sync/push — simple: log each item and return all as applied.
// Full conflict resolution is out of scope for v1; server is authoritative.
// Clients should pull after push to get the canonical state.
// Push POST /api/sync/push
func (h *SyncHandler) Push(w http.ResponseWriter, r *http.Request) {
var req pushRequest
if err := decodeJSON(r, &req); err != nil {
var body struct {
Changes []pushItem `json:"changes"`
}
if err := decodeJSON(r, &body); err != nil {
writeError(w, http.StatusBadRequest, "invalid JSON")
return
}
applied := make([]string, 0, len(req.Changes))
// For v1, we acknowledge all pushes. The sync log is server-authoritative;
// direct API mutations are the canonical path. Client pushes are advisory.
for range req.Changes {
applied = append(applied, "ok")
ctx := r.Context()
var applied, skipped []string
for _, item := range body.Changes {
ok, err := h.applyPushItem(ctx, item)
if err != nil {
// Skip on unexpected errors; don't abort the whole push.
skipped = append(skipped, item.EntityID)
continue
}
if ok {
applied = append(applied, item.EntityID)
} else {
skipped = append(skipped, item.EntityID)
}
}
writeJSON(w, http.StatusOK, pushResponse{Applied: applied, Conflicts: []string{}})
if applied == nil {
applied = []string{}
}
if skipped == nil {
skipped = []string{}
}
writeJSON(w, http.StatusOK, map[string]any{
"applied": applied,
"skipped": skipped,
})
}
// applyPushItem applies one client change. Returns (true, nil) if applied,
// (false, nil) if skipped (e.g. server row is newer), (false, err) on error.
func (h *SyncHandler) applyPushItem(ctx context.Context, item pushItem) (bool, error) {
switch item.Entity {
case "entries":
return h.applyEntry(ctx, item)
case "balance_adjustments":
return h.applyBalanceAdjustment(ctx, item)
case "settings_history":
return h.applySettings(ctx, item)
default:
// closed_days, closed_weeks — server-only mutations; skip silently.
return false, nil
}
}
// ── entries ───────────────────────────────────────────────────────────────────
func (h *SyncHandler) applyEntry(ctx context.Context, item pushItem) (bool, error) {
if item.Op == "delete" {
var payload struct {
ID string `json:"id"`
UpdatedAt int64 `json:"updated_at"`
}
if err := json.Unmarshal(item.Payload, &payload); err != nil {
return false, err
}
now := time.Now().UnixMilli()
// Only soft-delete if server row is not newer.
existing, err := h.entries.GetByID(ctx, item.EntityID)
if err != nil {
if errors.Is(err, sql.ErrNoRows) {
return true, nil // already gone
}
return false, err
}
if existing.UpdatedAt > payload.UpdatedAt {
return false, nil // server is newer
}
if err := h.entries.SoftDelete(ctx, item.EntityID, now); err != nil {
return false, err
}
if err := h.sync.LogEntryDelete(ctx, item.EntityID); err != nil {
return false, err
}
return true, nil
}
// upsert
var e domain.Entry
if err := json.Unmarshal(item.Payload, &e); err != nil {
return false, err
}
existing, err := h.entries.GetByID(ctx, e.ID)
if err != nil && !errors.Is(err, sql.ErrNoRows) {
return false, err
}
if existing != nil && existing.UpdatedAt >= e.UpdatedAt {
return false, nil // server is newer or equal
}
if existing == nil {
if err := h.entries.Create(ctx, &e); err != nil {
return false, err
}
} else {
if err := h.entries.Update(ctx, &e); err != nil {
return false, err
}
}
if err := h.sync.LogEntry(ctx, &e); err != nil {
return false, err
}
return true, nil
}
// ── balance_adjustments ───────────────────────────────────────────────────────
func (h *SyncHandler) applyBalanceAdjustment(ctx context.Context, item pushItem) (bool, error) {
if item.Op == "delete" {
var payload struct {
UpdatedAt int64 `json:"updated_at"`
}
if err := json.Unmarshal(item.Payload, &payload); err != nil {
return false, err
}
existing, err := h.adjustments.GetByID(ctx, item.EntityID)
if err != nil {
if errors.Is(err, store.ErrAdjustmentNotFound) {
return true, nil // already gone
}
return false, err
}
if existing.UpdatedAt > payload.UpdatedAt {
return false, nil // server is newer
}
if err := h.adjustments.Delete(ctx, item.EntityID); err != nil {
return false, err
}
if err := h.sync.LogBalanceAdjustmentDelete(ctx, item.EntityID); err != nil {
return false, err
}
return true, nil
}
// upsert
var a domain.BalanceAdjustment
if err := json.Unmarshal(item.Payload, &a); err != nil {
return false, err
}
existing, err := h.adjustments.GetByID(ctx, a.ID)
if err != nil && !errors.Is(err, store.ErrAdjustmentNotFound) {
return false, err
}
if existing != nil && existing.UpdatedAt >= a.UpdatedAt {
return false, nil
}
if existing == nil {
if err := h.adjustments.Create(ctx, &a); err != nil {
return false, err
}
} else {
if err := h.adjustments.Update(ctx, &a); err != nil {
return false, err
}
}
if err := h.sync.LogBalanceAdjustment(ctx, &a); err != nil {
return false, err
}
return true, nil
}
// ── settings_history ──────────────────────────────────────────────────────────
func (h *SyncHandler) applySettings(ctx context.Context, item pushItem) (bool, error) {
if item.Op == "delete" {
// Refuse to delete if it would leave zero rows (same rule as service).
count, err := h.settings.Count(ctx)
if err != nil {
return false, err
}
if count <= 1 {
return false, nil // skip silently
}
if err := h.settings.Delete(ctx, item.EntityID); err != nil {
return false, err
}
if err := h.sync.LogSettingsDelete(ctx, item.EntityID); err != nil {
return false, err
}
return true, nil
}
// upsert — last updated_at wins via store.Upsert
var s domain.Settings
if err := json.Unmarshal(item.Payload, &s); err != nil {
return false, err
}
if err := h.settings.Upsert(ctx, &s); err != nil {
return false, err
}
if err := h.sync.LogSettings(ctx, &s); err != nil {
return false, err
}
return true, nil
}

View File

@@ -7,6 +7,7 @@ import (
"fmt"
"time"
"github.com/google/uuid"
"github.com/wotra/wotra/internal/domain"
"github.com/wotra/wotra/internal/store"
)
@@ -78,12 +79,15 @@ func (s *SettingsService) Upsert(ctx context.Context, input UpsertSettingsInput)
return nil, fmt.Errorf("invalid effective_from: %w", err)
}
now := time.Now().UnixMilli()
set := &domain.Settings{
ID: uuid.New().String(),
EffectiveFrom: input.EffectiveFrom,
HoursPerWeek: input.HoursPerWeek,
WorkdaysMask: input.WorkdaysMask,
Timezone: input.Timezone,
CreatedAt: time.Now().UnixMilli(),
CreatedAt: now,
UpdatedAt: now,
}
if err := s.store.Insert(ctx, set); err != nil {
return nil, err
@@ -100,7 +104,7 @@ type UpdateSettingsInput struct {
}
// UpdateSettings edits an existing settings row in-place.
func (s *SettingsService) UpdateSettings(ctx context.Context, id int64, input UpdateSettingsInput) (*domain.Settings, error) {
func (s *SettingsService) UpdateSettings(ctx context.Context, id string, input UpdateSettingsInput) (*domain.Settings, error) {
if input.HoursPerWeek <= 0 {
return nil, ErrInvalidHours
}
@@ -129,6 +133,7 @@ func (s *SettingsService) UpdateSettings(ctx context.Context, id int64, input Up
set.HoursPerWeek = input.HoursPerWeek
set.WorkdaysMask = input.WorkdaysMask
set.Timezone = input.Timezone
set.UpdatedAt = time.Now().UnixMilli()
if err := s.store.Update(ctx, set); err != nil {
return nil, err
@@ -137,7 +142,7 @@ func (s *SettingsService) UpdateSettings(ctx context.Context, id int64, input Up
}
// DeleteSettings removes a settings row. Refuses if it is the only row.
func (s *SettingsService) DeleteSettings(ctx context.Context, id int64) error {
func (s *SettingsService) DeleteSettings(ctx context.Context, id string) error {
count, err := s.store.Count(ctx)
if err != nil {
return err

View File

@@ -0,0 +1,27 @@
-- +migrate Up
-- 1. Add logged_at to sync_log for TTL-based pruning.
ALTER TABLE sync_log ADD COLUMN logged_at INTEGER NOT NULL DEFAULT 0;
-- 2. Migrate settings_history to UUID TEXT primary key and add updated_at.
ALTER TABLE settings_history RENAME TO settings_history_old;
CREATE TABLE settings_history (
id TEXT PRIMARY KEY,
effective_from TEXT NOT NULL,
hours_per_week REAL NOT NULL,
workdays_mask INTEGER NOT NULL DEFAULT 31,
timezone TEXT NOT NULL DEFAULT 'UTC',
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
);
INSERT INTO settings_history (id, effective_from, hours_per_week, workdays_mask, timezone, created_at, updated_at)
SELECT lower(hex(randomblob(16))), effective_from, hours_per_week, workdays_mask, timezone, created_at, created_at
FROM settings_history_old;
DROP TABLE settings_history_old;
-- +migrate Down
-- (intentionally left minimal; restoring integer PK requires recreating the table again)
ALTER TABLE sync_log DROP COLUMN logged_at;

View File

@@ -19,7 +19,7 @@ func NewSettingsStore(db *sql.DB) *SettingsStore {
// Current returns the most recent settings effective on or before the given day key.
func (s *SettingsStore) Current(ctx context.Context, asOfDayKey string) (*domain.Settings, error) {
row := s.db.QueryRowContext(ctx,
`SELECT id, effective_from, hours_per_week, workdays_mask, timezone, created_at
`SELECT id, effective_from, hours_per_week, workdays_mask, timezone, created_at, updated_at
FROM settings_history
WHERE effective_from <= ?
ORDER BY effective_from DESC, id DESC
@@ -30,7 +30,7 @@ func (s *SettingsStore) Current(ctx context.Context, asOfDayKey string) (*domain
// Latest returns the most recently created settings row.
func (s *SettingsStore) Latest(ctx context.Context) (*domain.Settings, error) {
row := s.db.QueryRowContext(ctx,
`SELECT id, effective_from, hours_per_week, workdays_mask, timezone, created_at
`SELECT id, effective_from, hours_per_week, workdays_mask, timezone, created_at, updated_at
FROM settings_history
ORDER BY effective_from DESC, id DESC
LIMIT 1`)
@@ -40,7 +40,7 @@ func (s *SettingsStore) Latest(ctx context.Context) (*domain.Settings, error) {
// History returns all settings rows ordered by effective_from DESC.
func (s *SettingsStore) History(ctx context.Context) ([]*domain.Settings, error) {
rows, err := s.db.QueryContext(ctx,
`SELECT id, effective_from, hours_per_week, workdays_mask, timezone, created_at
`SELECT id, effective_from, hours_per_week, workdays_mask, timezone, created_at, updated_at
FROM settings_history ORDER BY effective_from DESC, id DESC`)
if err != nil {
return nil, err
@@ -49,7 +49,7 @@ func (s *SettingsStore) History(ctx context.Context) ([]*domain.Settings, error)
var result []*domain.Settings
for rows.Next() {
var s domain.Settings
if err := rows.Scan(&s.ID, &s.EffectiveFrom, &s.HoursPerWeek, &s.WorkdaysMask, &s.Timezone, &s.CreatedAt); err != nil {
if err := rows.Scan(&s.ID, &s.EffectiveFrom, &s.HoursPerWeek, &s.WorkdaysMask, &s.Timezone, &s.CreatedAt, &s.UpdatedAt); err != nil {
return nil, err
}
result = append(result, &s)
@@ -59,30 +59,41 @@ func (s *SettingsStore) History(ctx context.Context) ([]*domain.Settings, error)
// Insert inserts a new settings row.
func (s *SettingsStore) Insert(ctx context.Context, set *domain.Settings) error {
res, err := s.db.ExecContext(ctx,
`INSERT INTO settings_history (effective_from, hours_per_week, workdays_mask, timezone, created_at)
VALUES (?, ?, ?, ?, ?)`,
set.EffectiveFrom, set.HoursPerWeek, set.WorkdaysMask, set.Timezone, set.CreatedAt)
if err != nil {
return err
}
id, _ := res.LastInsertId()
set.ID = id
return nil
_, err := s.db.ExecContext(ctx,
`INSERT INTO settings_history (id, effective_from, hours_per_week, workdays_mask, timezone, created_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?)`,
set.ID, set.EffectiveFrom, set.HoursPerWeek, set.WorkdaysMask, set.Timezone, set.CreatedAt, set.UpdatedAt)
return err
}
// Update overwrites an existing settings row by ID.
func (s *SettingsStore) Update(ctx context.Context, set *domain.Settings) error {
_, err := s.db.ExecContext(ctx,
`UPDATE settings_history
SET effective_from=?, hours_per_week=?, workdays_mask=?, timezone=?
SET effective_from=?, hours_per_week=?, workdays_mask=?, timezone=?, updated_at=?
WHERE id=?`,
set.EffectiveFrom, set.HoursPerWeek, set.WorkdaysMask, set.Timezone, set.ID)
set.EffectiveFrom, set.HoursPerWeek, set.WorkdaysMask, set.Timezone, set.UpdatedAt, set.ID)
return err
}
// Upsert inserts or replaces a settings row (used by sync push; last updated_at wins).
func (s *SettingsStore) Upsert(ctx context.Context, set *domain.Settings) error {
_, err := s.db.ExecContext(ctx,
`INSERT INTO settings_history (id, effective_from, hours_per_week, workdays_mask, timezone, created_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
effective_from=excluded.effective_from,
hours_per_week=excluded.hours_per_week,
workdays_mask=excluded.workdays_mask,
timezone=excluded.timezone,
updated_at=excluded.updated_at
WHERE excluded.updated_at > settings_history.updated_at`,
set.ID, set.EffectiveFrom, set.HoursPerWeek, set.WorkdaysMask, set.Timezone, set.CreatedAt, set.UpdatedAt)
return err
}
// Delete removes a settings row by ID.
func (s *SettingsStore) Delete(ctx context.Context, id int64) error {
func (s *SettingsStore) Delete(ctx context.Context, id string) error {
_, err := s.db.ExecContext(ctx, `DELETE FROM settings_history WHERE id=?`, id)
return err
}
@@ -95,16 +106,16 @@ func (s *SettingsStore) Count(ctx context.Context) (int, error) {
}
// GetByID returns a single settings row by ID.
func (s *SettingsStore) GetByID(ctx context.Context, id int64) (*domain.Settings, error) {
func (s *SettingsStore) GetByID(ctx context.Context, id string) (*domain.Settings, error) {
row := s.db.QueryRowContext(ctx,
`SELECT id, effective_from, hours_per_week, workdays_mask, timezone, created_at
`SELECT id, effective_from, hours_per_week, workdays_mask, timezone, created_at, updated_at
FROM settings_history WHERE id=?`, id)
return scanSettings(row)
}
func scanSettings(row *sql.Row) (*domain.Settings, error) {
var s domain.Settings
err := row.Scan(&s.ID, &s.EffectiveFrom, &s.HoursPerWeek, &s.WorkdaysMask, &s.Timezone, &s.CreatedAt)
err := row.Scan(&s.ID, &s.EffectiveFrom, &s.HoursPerWeek, &s.WorkdaysMask, &s.Timezone, &s.CreatedAt, &s.UpdatedAt)
if err != nil {
return nil, err
}

View File

@@ -4,12 +4,21 @@ import (
"context"
"database/sql"
"encoding/json"
"errors"
"fmt"
"time"
"github.com/wotra/wotra/internal/domain"
)
// SyncStore manages the sync_log and server_version.
// ErrSyncStale is returned when the client's since_version is behind the prune marker.
var ErrSyncStale = errors.New("sync state stale: full re-sync required")
// pruneEntity and pruneOp are sentinel values written as a prune marker row.
const pruneEntity = "_pruned"
const pruneOp = "marker"
// SyncStore manages the sync_log.
type SyncStore struct {
db *sql.DB
}
@@ -21,13 +30,19 @@ func NewSyncStore(db *sql.DB) *SyncStore {
type SyncChange struct {
Entity string `json:"entity"`
EntityID string `json:"entity_id"`
Op string `json:"op"` // "upsert" | "delete"
Op string `json:"op"` // "upsert" | "delete" | "marker"
Version int64 `json:"version"`
Payload string `json:"payload"`
}
// Pull returns all sync_log rows with version > sinceVersion.
// It calls Prune first with a 30-day TTL.
// If the client is behind a prune marker it returns ErrSyncStale.
func (s *SyncStore) Pull(ctx context.Context, sinceVersion int64) ([]SyncChange, int64, error) {
if err := s.Prune(ctx, 30*24*time.Hour); err != nil {
return nil, 0, err
}
rows, err := s.db.QueryContext(ctx,
`SELECT entity, entity_id, op, version, payload FROM sync_log
WHERE version > ? ORDER BY version ASC`, sinceVersion)
@@ -35,6 +50,7 @@ func (s *SyncStore) Pull(ctx context.Context, sinceVersion int64) ([]SyncChange,
return nil, 0, err
}
defer rows.Close()
var changes []SyncChange
var maxVersion int64 = sinceVersion
for rows.Next() {
@@ -42,6 +58,10 @@ func (s *SyncStore) Pull(ctx context.Context, sinceVersion int64) ([]SyncChange,
if err := rows.Scan(&c.Entity, &c.EntityID, &c.Op, &c.Version, &c.Payload); err != nil {
return nil, 0, err
}
// First row with entity="_pruned" means client is stale.
if c.Entity == pruneEntity {
return nil, 0, ErrSyncStale
}
if c.Version > maxVersion {
maxVersion = c.Version
}
@@ -50,6 +70,49 @@ func (s *SyncStore) Pull(ctx context.Context, sinceVersion int64) ([]SyncChange,
return changes, maxVersion, rows.Err()
}
// Prune deletes sync_log rows older than ttl and inserts a prune marker at the
// version boundary so stale clients can detect they need a full re-sync.
func (s *SyncStore) Prune(ctx context.Context, ttl time.Duration) error {
cutoff := time.Now().Add(-ttl).UnixMilli()
tx, err := s.db.BeginTx(ctx, nil)
if err != nil {
return err
}
defer tx.Rollback() //nolint:errcheck
// Find max version among rows that will be pruned (excluding existing markers).
var maxPruned sql.NullInt64
err = tx.QueryRowContext(ctx,
`SELECT MAX(version) FROM sync_log WHERE logged_at < ? AND entity != ?`,
cutoff, pruneEntity).Scan(&maxPruned)
if err != nil {
return err
}
if !maxPruned.Valid {
// Nothing to prune.
return tx.Commit()
}
// Delete old rows (but not the existing marker, if any).
if _, err = tx.ExecContext(ctx,
`DELETE FROM sync_log WHERE logged_at < ? AND entity != ?`,
cutoff, pruneEntity); err != nil {
return err
}
// Insert (or replace) the prune marker at the boundary version.
now := time.Now().UnixMilli()
if _, err = tx.ExecContext(ctx,
`INSERT OR REPLACE INTO sync_log (entity, entity_id, op, version, payload, logged_at)
VALUES (?, ?, ?, ?, '{}', ?)`,
pruneEntity, pruneEntity, pruneOp, maxPruned.Int64, now); err != nil {
return err
}
return tx.Commit()
}
// nextVersion returns the next monotonic version number.
func (s *SyncStore) nextVersion(ctx context.Context) (int64, error) {
var max sql.NullInt64
@@ -96,6 +159,21 @@ func (s *SyncStore) LogClosedWeek(ctx context.Context, w *domain.ClosedWeek) err
return s.log(ctx, "closed_weeks", w.WeekKey, "upsert", string(payload))
}
// LogSettings appends a settings upsert to the sync log.
func (s *SyncStore) LogSettings(ctx context.Context, set *domain.Settings) error {
payload, err := json.Marshal(set)
if err != nil {
return err
}
return s.log(ctx, "settings_history", set.ID, "upsert", string(payload))
}
// LogSettingsDelete appends a settings delete to the sync log.
func (s *SyncStore) LogSettingsDelete(ctx context.Context, id string) error {
payload := fmt.Sprintf(`{"id":%q}`, id)
return s.log(ctx, "settings_history", id, "delete", payload)
}
// LogBalanceAdjustment appends a balance_adjustment upsert to the sync log.
func (s *SyncStore) LogBalanceAdjustment(ctx context.Context, a *domain.BalanceAdjustment) error {
payload, err := json.Marshal(a)
@@ -116,8 +194,9 @@ func (s *SyncStore) log(ctx context.Context, entity, entityID, op, payload strin
if err != nil {
return err
}
now := time.Now().UnixMilli()
_, err = s.db.ExecContext(ctx,
`INSERT INTO sync_log (entity, entity_id, op, version, payload) VALUES (?, ?, ?, ?, ?)`,
entity, entityID, op, version, payload)
`INSERT INTO sync_log (entity, entity_id, op, version, payload, logged_at) VALUES (?, ?, ?, ?, ?, ?)`,
entity, entityID, op, version, payload, now)
return err
}

View File

@@ -0,0 +1,140 @@
package store_test
import (
"context"
"errors"
"testing"
"time"
"github.com/wotra/wotra/internal/domain"
"github.com/wotra/wotra/internal/store"
)
func mustSyncStore(t *testing.T) *store.SyncStore {
t.Helper()
db, err := store.Open(":memory:")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() { db.Close() })
return store.NewSyncStore(db)
}
func TestSyncPullNormal(t *testing.T) {
s := mustSyncStore(t)
ctx := context.Background()
e1 := &domain.Entry{ID: "e1", DayKey: "2026-04-01", UpdatedAt: time.Now().UnixMilli()}
e2 := &domain.Entry{ID: "e2", DayKey: "2026-04-02", UpdatedAt: time.Now().UnixMilli()}
if err := s.LogEntry(ctx, e1); err != nil {
t.Fatal(err)
}
if err := s.LogEntry(ctx, e2); err != nil {
t.Fatal(err)
}
changes, ver, err := s.Pull(ctx, 0)
if err != nil {
t.Fatalf("Pull: %v", err)
}
if len(changes) != 2 {
t.Fatalf("expected 2 changes, got %d", len(changes))
}
if ver != 2 {
t.Fatalf("expected server_version=2, got %d", ver)
}
// Incremental pull: since=1 should return only e2.
changes2, ver2, err := s.Pull(ctx, 1)
if err != nil {
t.Fatal(err)
}
if len(changes2) != 1 || changes2[0].EntityID != "e2" {
t.Fatalf("expected [e2], got %+v", changes2)
}
if ver2 != 2 {
t.Fatalf("expected ver=2, got %d", ver2)
}
}
func TestSyncPruneStaleClient(t *testing.T) {
s := mustSyncStore(t)
ctx := context.Background()
// Log two entries then prune all of them (zero TTL).
e1 := &domain.Entry{ID: "e1", DayKey: "2026-01-01", UpdatedAt: time.Now().UnixMilli()}
e2 := &domain.Entry{ID: "e2", DayKey: "2026-01-02", UpdatedAt: time.Now().UnixMilli()}
if err := s.LogEntry(ctx, e1); err != nil {
t.Fatal(err)
}
if err := s.LogEntry(ctx, e2); err != nil {
t.Fatal(err)
}
// Prune with -1ms TTL → cutoff is 1ms in the future, so all rows are pruned.
if err := s.Prune(ctx, -time.Millisecond); err != nil {
t.Fatalf("Prune: %v", err)
}
// A stale client (since=0) should get ErrSyncStale.
_, _, err := s.Pull(ctx, 0)
if !errors.Is(err, store.ErrSyncStale) {
t.Fatalf("expected ErrSyncStale, got %v", err)
}
}
func TestSyncPruneNoRows(t *testing.T) {
s := mustSyncStore(t)
ctx := context.Background()
// Prune on empty log is a no-op.
if err := s.Prune(ctx, 30*24*time.Hour); err != nil {
t.Fatalf("Prune on empty log: %v", err)
}
changes, ver, err := s.Pull(ctx, 0)
if err != nil {
t.Fatalf("Pull: %v", err)
}
if len(changes) != 0 {
t.Fatalf("expected 0 changes, got %d", len(changes))
}
if ver != 0 {
t.Fatalf("expected ver=0, got %d", ver)
}
}
func TestSyncClientAheadOfMarker(t *testing.T) {
s := mustSyncStore(t)
ctx := context.Background()
// Log two entries, prune all, then log a third.
e1 := &domain.Entry{ID: "e1", DayKey: "2026-01-01", UpdatedAt: time.Now().UnixMilli()}
e2 := &domain.Entry{ID: "e2", DayKey: "2026-01-02", UpdatedAt: time.Now().UnixMilli()}
if err := s.LogEntry(ctx, e1); err != nil {
t.Fatal(err)
}
if err := s.LogEntry(ctx, e2); err != nil {
t.Fatal(err)
}
if err := s.Prune(ctx, -time.Millisecond); err != nil {
t.Fatal(err)
}
// Marker is at version 2. Log a new entry → version 3.
e3 := &domain.Entry{ID: "e3", DayKey: "2026-04-01", UpdatedAt: time.Now().UnixMilli()}
if err := s.LogEntry(ctx, e3); err != nil {
t.Fatal(err)
}
// A client with since=2 is past the marker — should get only e3.
changes, ver, err := s.Pull(ctx, 2)
if err != nil {
t.Fatalf("expected no error for up-to-date client, got %v", err)
}
if len(changes) != 1 || changes[0].EntityID != "e3" {
t.Fatalf("expected [e3], got %+v", changes)
}
if ver != 3 {
t.Fatalf("expected ver=3, got %d", ver)
}
}

View File

@@ -1,5 +1,11 @@
// API client for Wotra backend.
// Base URL: /api (relative, works both in dev proxy and production)
//
// Offline fallback: if a mutation throws a network error (TypeError: Failed to fetch),
// the call writes to Dexie, enqueues the operation in the outbox, and resolves with the local object.
// The background sync loop will push the outbox to the server when connectivity returns.
import { db } from '$lib/stores/db';
const API_BASE = '/api';
@@ -15,6 +21,10 @@ export function hasToken(): boolean {
return !!localStorage.getItem('auth_token');
}
function isNetworkError(e: unknown): boolean {
return e instanceof TypeError && e.message.toLowerCase().includes('fetch');
}
async function request<T>(method: string, path: string, body?: unknown): Promise<T> {
const res = await fetch(`${API_BASE}${path}`, {
method,
@@ -32,6 +42,17 @@ async function request<T>(method: string, path: string, body?: unknown): Promise
return res.json();
}
/** Enqueue an outbox item for offline push. */
async function enqueue(entity: string, entity_id: string, op: 'upsert' | 'delete', payload: unknown) {
await db.outbox.add({
entity,
entity_id,
op,
payload: JSON.stringify(payload),
created_at: Date.now()
});
}
export class ApiError extends Error {
constructor(
public status: number,
@@ -73,12 +94,13 @@ export interface ClosedWeek {
}
export interface Settings {
id: number;
id: string; // UUID
effective_from: string;
hours_per_week: number;
workdays_mask: number;
timezone: string;
created_at: number;
updated_at: number;
}
export interface BalanceAdjustment {
@@ -101,19 +123,90 @@ export interface BalanceSummary {
// ─── Entries ─────────────────────────────────────────────────────────────────
export const entries = {
start: (note = '') => request<Entry>('POST', '/entries/start', { note }),
createInterval: (startTime: number, endTime: number, note = '') =>
request<Entry>('POST', '/entries', { start_time: startTime, end_time: endTime, note }),
stop: (id: string) => request<Entry>('POST', `/entries/${id}/stop`),
start: async (note = ''): Promise<Entry> => {
try {
return await request<Entry>('POST', '/entries/start', { note });
} catch (e) {
if (!isNetworkError(e)) throw e;
const entry: Entry = {
id: crypto.randomUUID(),
start_time: Date.now(),
end_time: null,
auto_stopped: false,
note,
day_key: new Date().toISOString().slice(0, 10),
updated_at: Date.now()
};
await db.entries.put(entry);
await enqueue('entries', entry.id, 'upsert', entry);
return entry;
}
},
createInterval: async (startTime: number, endTime: number, note = ''): Promise<Entry> => {
try {
return await request<Entry>('POST', '/entries', { start_time: startTime, end_time: endTime, note });
} catch (e) {
if (!isNetworkError(e)) throw e;
const entry: Entry = {
id: crypto.randomUUID(),
start_time: startTime,
end_time: endTime,
auto_stopped: false,
note,
day_key: new Date(startTime).toISOString().slice(0, 10),
updated_at: Date.now()
};
await db.entries.put(entry);
await enqueue('entries', entry.id, 'upsert', entry);
return entry;
}
},
stop: async (id: string): Promise<Entry> => {
try {
return await request<Entry>('POST', `/entries/${id}/stop`);
} catch (e) {
if (!isNetworkError(e)) throw e;
const existing = await db.entries.get(id);
if (!existing) throw e;
const updated: Entry = { ...existing, end_time: Date.now(), updated_at: Date.now() };
await db.entries.put(updated);
await enqueue('entries', id, 'upsert', updated);
return updated;
}
},
list: (from?: string, to?: string) => {
const params = new URLSearchParams();
if (from) params.set('from', from);
if (to) params.set('to', to);
return request<Entry[]>('GET', `/entries?${params}`);
},
update: (id: string, body: { start_time?: number; end_time?: number; note?: string }) =>
request<Entry>('PUT', `/entries/${id}`, body),
delete: (id: string) => request<void>('DELETE', `/entries/${id}`)
update: async (id: string, body: { start_time?: number; end_time?: number; note?: string }): Promise<Entry> => {
try {
return await request<Entry>('PUT', `/entries/${id}`, body);
} catch (e) {
if (!isNetworkError(e)) throw e;
const existing = await db.entries.get(id);
if (!existing) throw e;
const updated: Entry = { ...existing, ...body, updated_at: Date.now() };
await db.entries.put(updated);
await enqueue('entries', id, 'upsert', updated);
return updated;
}
},
delete: async (id: string): Promise<void> => {
try {
return await request<void>('DELETE', `/entries/${id}`);
} catch (e) {
if (!isNetworkError(e)) throw e;
await db.entries.delete(id);
await enqueue('entries', id, 'delete', { id, updated_at: Date.now() });
}
}
};
// ─── Days ────────────────────────────────────────────────────────────────────
@@ -149,11 +242,51 @@ export const weeks = {
export const balance = {
list: () => request<BalanceAdjustment[]>('GET', '/balance/adjustments'),
create: (body: { delta_ms: number; note?: string; effective_at?: number }) =>
request<BalanceAdjustment>('POST', '/balance/adjustments', body),
update: (id: string, body: { delta_ms: number; note?: string; effective_at?: number }) =>
request<BalanceAdjustment>('PUT', `/balance/adjustments/${id}`, body),
delete: (id: string) => request<void>('DELETE', `/balance/adjustments/${id}`)
create: async (body: { delta_ms: number; note?: string; effective_at?: number }): Promise<BalanceAdjustment> => {
try {
return await request<BalanceAdjustment>('POST', '/balance/adjustments', body);
} catch (e) {
if (!isNetworkError(e)) throw e;
const now = Date.now();
const adj: BalanceAdjustment = {
id: crypto.randomUUID(),
delta_ms: body.delta_ms,
note: body.note ?? '',
effective_at: body.effective_at ?? now,
created_at: now,
updated_at: now
};
await db.balance_adjustments.put(adj);
await enqueue('balance_adjustments', adj.id, 'upsert', adj);
return adj;
}
},
update: async (id: string, body: { delta_ms: number; note?: string; effective_at?: number }): Promise<BalanceAdjustment> => {
try {
return await request<BalanceAdjustment>('PUT', `/balance/adjustments/${id}`, body);
} catch (e) {
if (!isNetworkError(e)) throw e;
const existing = await db.balance_adjustments.get(id);
if (!existing) throw e;
const updated: BalanceAdjustment = { ...existing, ...body, updated_at: Date.now() };
await db.balance_adjustments.put(updated);
await enqueue('balance_adjustments', id, 'upsert', updated);
return updated;
}
},
delete: async (id: string): Promise<void> => {
try {
return await request<void>('DELETE', `/balance/adjustments/${id}`);
} catch (e) {
if (!isNetworkError(e)) throw e;
const existing = await db.balance_adjustments.get(id);
await db.balance_adjustments.delete(id);
await enqueue('balance_adjustments', id, 'delete', { id, updated_at: existing?.updated_at ?? Date.now() });
}
}
};
// ─── Settings ────────────────────────────────────────────────────────────────
@@ -161,19 +294,58 @@ export const balance = {
export const settings = {
current: () => request<Settings>('GET', '/settings'),
history: () => request<Settings[]>('GET', '/settings/history'),
upsert: (body: {
upsert: async (body: {
effective_from: string;
hours_per_week: number;
workdays_mask: number;
timezone: string;
}) => request<Settings>('PUT', '/settings', body),
update: (id: number, body: {
}): Promise<Settings> => {
try {
return await request<Settings>('PUT', '/settings', body);
} catch (e) {
if (!isNetworkError(e)) throw e;
const now = Date.now();
const s: Settings = {
id: crypto.randomUUID(),
...body,
created_at: now,
updated_at: now
};
await db.settings_history.put(s);
await enqueue('settings_history', s.id, 'upsert', s);
return s;
}
},
update: async (id: string, body: {
effective_from: string;
hours_per_week: number;
workdays_mask: number;
timezone: string;
}) => request<Settings>('PUT', `/settings/history/${id}`, body),
delete: (id: number) => request<void>('DELETE', `/settings/history/${id}`)
}): Promise<Settings> => {
try {
return await request<Settings>('PUT', `/settings/history/${id}`, body);
} catch (e) {
if (!isNetworkError(e)) throw e;
const existing = await db.settings_history.get(id);
if (!existing) throw e;
const updated: Settings = { ...existing, ...body, updated_at: Date.now() };
await db.settings_history.put(updated);
await enqueue('settings_history', id, 'upsert', updated);
return updated;
}
},
delete: async (id: string): Promise<void> => {
try {
return await request<void>('DELETE', `/settings/history/${id}`);
} catch (e) {
if (!isNetworkError(e)) throw e;
await db.settings_history.delete(id);
await enqueue('settings_history', id, 'delete', { id, updated_at: Date.now() });
}
}
};
// ─── Health ──────────────────────────────────────────────────────────────────

View File

@@ -3,7 +3,7 @@ import type { Entry, ClosedDay, ClosedWeek, Settings, BalanceAdjustment } from '
export interface OutboxItem {
id?: number; // auto-increment
entity: string; // 'entries' | 'closed_days' | 'closed_weeks' | 'settings' | 'balance_adjustments'
entity: string; // 'entries' | 'closed_days' | 'closed_weeks' | 'settings_history' | 'balance_adjustments'
entity_id: string;
op: 'upsert' | 'delete';
payload: string; // JSON
@@ -14,7 +14,7 @@ export class WotraDB extends Dexie {
entries!: Table<Entry, string>;
closed_days!: Table<ClosedDay, string>;
closed_weeks!: Table<ClosedWeek, string>;
settings_history!: Table<Settings, number>;
settings_history!: Table<Settings, string>; // UUID PK as of v3
balance_adjustments!: Table<BalanceAdjustment, string>;
outbox!: Table<OutboxItem, number>;
meta!: Table<{ key: string; value: string }, string>;
@@ -32,6 +32,13 @@ export class WotraDB extends Dexie {
this.version(2).stores({
balance_adjustments: 'id, effective_at, updated_at'
});
// v3: settings_history switches from integer autoincrement PK to UUID TEXT PK.
// Clear the table on upgrade; the next pull will repopulate it from the server.
this.version(3).stores({
settings_history: 'id, effective_from, updated_at'
}).upgrade(tx => {
return tx.table('settings_history').clear();
});
}
}

View File

@@ -1,10 +1,15 @@
/**
* Sync layer: push local outbox items to server, pull server changes.
* Uses last-write-wins based on updated_at.
* Sync layer: push local outbox to server, pull server changes.
*
* Online-first, offline-fallback:
* - Mutations go directly to the server via REST; on network error they are
* written to Dexie + outbox by the API client.
* - This loop pushes any queued outbox items, then pulls new server changes.
* - On 410 Gone the client is stale: wipe all tables and re-pull from 0.
*/
import { db, getLastVersion, setLastVersion } from './db';
import type { OutboxItem } from './db';
import { setToken, hasToken } from '$lib/api/client';
import { hasToken } from '$lib/api/client';
const API = '/api';
@@ -16,42 +21,69 @@ function headers() {
};
}
// ─── Push ─────────────────────────────────────────────────────────────────────
export async function pushOutbox(): Promise<void> {
if (!hasToken()) return;
const items = await db.outbox.toArray();
if (items.length === 0) return;
const res = await fetch(`${API}/sync/push`, {
method: 'POST',
headers: headers(),
body: JSON.stringify({ changes: items.map((i) => ({ ...JSON.parse(i.payload), _op: i.op, _entity: i.entity })) })
});
if (!res.ok) return; // will retry on next sync
let res: Response;
try {
res = await fetch(`${API}/sync/push`, {
method: 'POST',
headers: headers(),
body: JSON.stringify({
changes: items.map((i) => ({
entity: i.entity,
entity_id: i.entity_id,
op: i.op,
payload: JSON.parse(i.payload)
}))
})
});
} catch {
return; // network unavailable; retry next cycle
}
if (!res.ok) return;
const { applied } = await res.json() as { applied: string[]; conflicts: string[] };
// Remove applied items from outbox
const appliedIds = new Set(applied);
const toDelete = items.filter((i) => i.entity_id && appliedIds.has(i.entity_id)).map((i) => i.id!);
const { applied } = (await res.json()) as { applied: string[]; skipped: string[] };
const appliedSet = new Set(applied);
const toDelete = items
.filter((i) => appliedSet.has(i.entity_id))
.map((i) => i.id!);
if (toDelete.length > 0) await db.outbox.bulkDelete(toDelete);
}
// ─── Pull ─────────────────────────────────────────────────────────────────────
export async function pullChanges(): Promise<void> {
if (!hasToken()) return;
const since = await getLastVersion();
const res = await fetch(`${API}/sync/pull`, {
method: 'POST',
headers: headers(),
body: JSON.stringify({ since_version: since })
});
let res: Response;
try {
res = await fetch(`${API}/sync/pull?since=${since}`, { headers: headers() });
} catch {
return; // network unavailable
}
if (res.status === 410) {
// Server has pruned data the client hasn't seen — full re-sync.
await coldStart();
return pullChanges();
}
if (!res.ok) return;
const { changes, server_version } = await res.json() as {
const { changes, server_version } = (await res.json()) as {
changes: Array<{ entity: string; entity_id: string; op: string; payload: string }>;
server_version: number;
};
for (const change of changes) {
const data = JSON.parse(change.payload);
const data = typeof change.payload === 'string'
? JSON.parse(change.payload)
: change.payload;
if (change.op === 'delete') {
await applyDelete(change.entity, change.entity_id);
} else {
@@ -61,31 +93,50 @@ export async function pullChanges(): Promise<void> {
await setLastVersion(server_version);
}
// ─── Cold start ───────────────────────────────────────────────────────────────
/** Wipe all local tables and reset version so the next pull fetches everything. */
export async function coldStart(): Promise<void> {
await Promise.all([
db.entries.clear(),
db.closed_days.clear(),
db.closed_weeks.clear(),
db.settings_history.clear(),
db.balance_adjustments.clear()
]);
await setLastVersion(0);
}
// ─── Apply helpers ────────────────────────────────────────────────────────────
async function applyUpsert(entity: string, data: unknown) {
switch (entity) {
case 'entries': await db.entries.put(data as any); break;
case 'closed_days': await db.closed_days.put(data as any); break;
case 'closed_weeks': await db.closed_weeks.put(data as any); break;
case 'settings_history': await db.settings_history.put(data as any); break;
case 'entries': await db.entries.put(data as any); break;
case 'closed_days': await db.closed_days.put(data as any); break;
case 'closed_weeks': await db.closed_weeks.put(data as any); break;
case 'settings_history': await db.settings_history.put(data as any); break;
case 'balance_adjustments': await db.balance_adjustments.put(data as any); break;
}
}
async function applyDelete(entity: string, id: string) {
switch (entity) {
case 'entries': await db.entries.delete(id); break;
case 'closed_days': await db.closed_days.delete(id); break;
case 'closed_weeks': await db.closed_weeks.delete(id); break;
case 'entries': await db.entries.delete(id); break;
case 'closed_days': await db.closed_days.delete(id); break;
case 'closed_weeks': await db.closed_weeks.delete(id); break;
case 'settings_history': await db.settings_history.delete(id); break;
case 'balance_adjustments': await db.balance_adjustments.delete(id); break;
}
}
// ─── Sync loop ────────────────────────────────────────────────────────────────
let syncInterval: ReturnType<typeof setInterval> | null = null;
/** Start background sync loop (every 30 seconds). */
export function startSync() {
if (syncInterval) return;
sync(); // immediate
sync(); // immediate first run
syncInterval = setInterval(sync, 30_000);
}
@@ -101,6 +152,6 @@ async function sync() {
await pushOutbox();
await pullChanges();
} catch {
// Network unavailable — will retry
// Unexpected error — will retry on next interval.
}
}

View File

@@ -19,7 +19,7 @@
let formTimezone = $state('UTC');
// Inline edit state for history rows
let editingId = $state<number | null>(null);
let editingId = $state<string | null>(null);
let editEffectiveFrom = $state('');
let editHoursPerWeek = $state(0);
let editWorkdaysMask = $state(31);
@@ -109,7 +109,7 @@
editError = '';
}
async function saveEdit(id: number) {
async function saveEdit(id: string) {
editError = '';
try {
await settings.update(id, {
@@ -125,7 +125,7 @@
}
}
async function handleDelete(id: number) {
async function handleDelete(id: string) {
error = '';
try {
await settings.delete(id);