[DEMO READY] Working NaviDocs v0.5 - Feature specs + Launch system
✅ Working Features:
- Backend API (port 8001): health, documents, search endpoints
- Frontend SPA (port 8081): Vue 3.5 + Vite
- Meilisearch full-text search (<10ms queries)
- Document upload + OCR pipeline (Tesseract)
- JWT authentication with multi-tenant isolation
- Test organization: "Test Yacht Azimut 55S"

🔧 Infrastructure:
- Launch checklist system (4 scripts: pre-launch, verify, debug, version)
- OCR reprocessing utility for fixing unindexed documents
- E2E test suites (Playwright manual tests)

📋 Specs Ready for Cloud Sessions:
- FEATURE_SPEC_TIMELINE.md (organization activity timeline)
- IMPROVEMENT_PLAN_OCR_AND_UPLOADS.md (smart OCR + multi-format)

🎯 Demo Readiness: 82/100 (CONDITIONAL GO)
- Search works for documents in the correct tenant
- Full pipeline tested: upload → OCR → index → search
- Zero P0 blockers

📊 Test Results:
- 10-agent testing swarm completed
- Backend: 95% functional
- Frontend: 60% functional (manual testing needed)
- Database: 100% verified (21 tables, multi-tenant working)

🚀 Next: Cloud sessions will implement timeline + OCR optimization

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
parent bef2c1f96b
commit 1addf07c23
9 changed files with 3203 additions and 0 deletions

FEATURE_SPEC_TIMELINE.md (new file, 541 lines)
# Feature Spec: Organization Timeline

**Priority:** P1 (Core Demo Feature)
**Estimated Time:** 2-3 hours
**User Story:** "As a boat owner, I want to see a chronological timeline of all activities (uploads, maintenance, warranty events) for my organization, with most recent at top."

---

## Requirements

### Functional Requirements
1. **Timeline View** - Dedicated page showing a chronological activity feed
2. **Organization Scoped** - Only show events for the current user's organization
3. **Reverse Chronological** - Most recent first (top), oldest last (bottom)
4. **Event Types:**
   - Document uploads (PDFs, images, manuals)
   - Maintenance records
   - Warranty claims/alerts
   - Service provider contacts
   - Settings changes (future)
   - User invitations (future)

### UI Requirements
- **Timeline URL:** `/timeline` or `/organization/:orgId/timeline`
- **Visual Design:** Vertical timeline with date markers
- **Grouping:** Group events by date (Today, Yesterday, Last Week, etc.)
- **Infinite Scroll:** Load more as the user scrolls down
- **Filters:** Filter by event type, date range, entity (specific boat)

---

## Database Schema

### Option 1: Unified Activity Log Table (Recommended)

**New Migration:** `010_activity_timeline.sql`

```sql
CREATE TABLE activity_log (
  id TEXT PRIMARY KEY,
  organization_id TEXT NOT NULL,
  entity_id TEXT,                -- Optional: boat/yacht ID if event is entity-specific
  user_id TEXT NOT NULL,
  event_type TEXT NOT NULL,      -- 'document_upload', 'maintenance_log', 'warranty_claim', 'settings_change'
  event_action TEXT,             -- 'created', 'updated', 'deleted', 'viewed'
  event_title TEXT NOT NULL,
  event_description TEXT,
  metadata TEXT,                 -- JSON blob for event-specific data
  reference_id TEXT,             -- ID of related resource (document_id, maintenance_id, etc.)
  reference_type TEXT,           -- 'document', 'maintenance', 'warranty', etc.
  created_at INTEGER NOT NULL,   -- Unix epoch milliseconds (matches Date.now() in the logger)
  FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE,
  FOREIGN KEY (entity_id) REFERENCES boats(id) ON DELETE CASCADE,
  FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE SET NULL
);

CREATE INDEX idx_activity_org_created ON activity_log(organization_id, created_at DESC);
CREATE INDEX idx_activity_entity ON activity_log(entity_id, created_at DESC);
CREATE INDEX idx_activity_type ON activity_log(event_type);
```
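
The composite index on `(organization_id, created_at DESC)` is what keeps the main feed query cheap: it satisfies both the tenant filter and the sort. A minimal sketch of how the feed query could be assembled for the prepared-statement style used elsewhere in this spec; the helper name is hypothetical, not existing code:

```javascript
// Hypothetical helper: build the paginated feed query so the
// (organization_id, created_at DESC) index covers filter + sort.
// Returns { sql, params } suitable for db.prepare(sql).all(...params).
function buildTimelineQuery({ organizationId, eventType, entityId, limit = 50, offset = 0 }) {
  const clauses = ['organization_id = ?'];
  const params = [organizationId];

  // Optional filters map directly onto the indexed columns
  if (eventType) { clauses.push('event_type = ?'); params.push(eventType); }
  if (entityId)  { clauses.push('entity_id = ?');  params.push(entityId); }

  const sql =
    `SELECT * FROM activity_log WHERE ${clauses.join(' AND ')} ` +
    `ORDER BY created_at DESC LIMIT ? OFFSET ?`;
  params.push(limit, offset);

  return { sql, params };
}
```

Keeping the `WHERE` clause limited to equality filters followed by the `ORDER BY created_at DESC` is what lets SQLite walk the index in order instead of sorting.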

**Example Records:**

```json
[
  {
    "id": "evt_abc123",
    "organization_id": "6ce0dfc7-f754-4122-afde-85154bc4d0ae",
    "entity_id": "boat_azimut55s",
    "user_id": "bef71b0c-3427-485b-b4dd-b6399f4d4c45",
    "event_type": "document_upload",
    "event_action": "created",
    "event_title": "Owner Manual - Azimut 55S",
    "event_description": "PDF uploaded: Liliane1_Prestige_Manual_EN.pdf (6.7MB, 100 pages)",
    "metadata": "{\"fileSize\": 6976158, \"pageCount\": 100, \"mimeType\": \"application/pdf\"}",
    "reference_id": "efb25a15-7d84-4bc3-b070-6bd7dec8d59a",
    "reference_type": "document",
    "created_at": 1760903255000
  },
  {
    "id": "evt_def456",
    "organization_id": "6ce0dfc7-f754-4122-afde-85154bc4d0ae",
    "entity_id": "boat_azimut55s",
    "user_id": "bef71b0c-3427-485b-b4dd-b6399f4d4c45",
    "event_type": "maintenance_log",
    "event_action": "created",
    "event_title": "Bilge Pump Service",
    "event_description": "Replaced bilge pump filter, tested operation",
    "metadata": "{\"cost\": 245.50, \"provider\": \"Marine Services Ltd\", \"nextServiceDue\": 1776903255}",
    "reference_id": "maint_xyz789",
    "reference_type": "maintenance",
    "created_at": 1761007118000
  }
]
```

### Option 2: Event Sourcing (More Complex, Skip for MVP)
- Separate tables per event type
- Aggregate into timeline via JOIN
- Better for audit trails, overkill for MVP

---

## API Endpoints

### GET /api/organizations/:orgId/timeline

**Query Parameters:**
- `limit` (default: 50, max: 200)
- `offset` (default: 0) - For pagination
- `eventType` (optional) - Filter: document_upload, maintenance_log, warranty_claim
- `entityId` (optional) - Filter by specific boat
- `startDate` (optional) - Unix timestamp (milliseconds)
- `endDate` (optional) - Unix timestamp (milliseconds)
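
On the client side, these parameters can be assembled with `URLSearchParams`, dropping unset filters so the server-side defaults apply. A sketch (the helper name is illustrative):

```javascript
// Build the query string for GET /api/organizations/:orgId/timeline.
// Unset or empty filters are omitted so the API falls back to defaults.
function timelineQueryString({ limit, offset, eventType, entityId, startDate, endDate } = {}) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries({ limit, offset, eventType, entityId, startDate, endDate })) {
    if (value !== undefined && value !== null && value !== '') params.set(key, String(value));
  }
  return params.toString();
}
```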

**Response:**
```json
{
  "events": [
    {
      "id": "evt_abc123",
      "eventType": "document_upload",
      "eventAction": "created",
      "title": "Owner Manual - Azimut 55S",
      "description": "PDF uploaded: Liliane1_Prestige_Manual_EN.pdf",
      "createdAt": 1761007118620,
      "user": {
        "id": "bef71b0c-3427-485b-b4dd-b6399f4d4c45",
        "name": "Test User 2",
        "email": "test2@navidocs.test"
      },
      "entity": {
        "id": "boat_azimut55s",
        "name": "Liliane I",
        "type": "boat"
      },
      "metadata": {
        "fileSize": 6976158,
        "pageCount": 100
      },
      "referenceId": "efb25a15-7d84-4bc3-b070-6bd7dec8d59a",
      "referenceType": "document"
    }
  ],
  "pagination": {
    "total": 156,
    "limit": 50,
    "offset": 0,
    "hasMore": true
  },
  "groupedByDate": {
    "today": 5,
    "yesterday": 12,
    "thisWeek": 23,
    "thisMonth": 45,
    "older": 71
  }
}
```
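
The route handler behind this response (`routes/timeline.js` in the checklist below) is not spelled out in this spec. The one detail worth pinning down is pagination clamping, since the parameters document a default of 50 and a max of 200. A sketch of that helper (name and placement are assumptions):

```javascript
// Enforce the documented pagination bounds: limit defaults to 50 and is
// capped at 200; offset is coerced to a non-negative integer.
function clampPagination(query = {}) {
  const limit = Math.min(Math.max(parseInt(query.limit, 10) || 50, 1), 200);
  const offset = Math.max(parseInt(query.offset, 10) || 0, 0);
  return { limit, offset };
}
```

The handler would call this on `req.query` before building the SQL query, so a client asking for `limit=5000` silently gets 200 instead of an error.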

### POST /api/activity (Internal - Auto-called by services)

**Body:**
```json
{
  "organizationId": "6ce0dfc7-f754-4122-afde-85154bc4d0ae",
  "entityId": "boat_azimut55s",
  "userId": "bef71b0c-3427-485b-b4dd-b6399f4d4c45",
  "eventType": "document_upload",
  "eventAction": "created",
  "eventTitle": "Owner Manual Uploaded",
  "eventDescription": "Liliane1_Prestige_Manual_EN.pdf",
  "metadata": {},
  "referenceId": "efb25a15-7d84-4bc3-b070-6bd7dec8d59a",
  "referenceType": "document"
}
```

---
## Frontend Implementation

### Route: `/timeline`

**File:** `client/src/views/Timeline.vue`

```vue
<template>
  <div class="timeline-page">
    <header class="timeline-header">
      <h1>Activity Timeline</h1>
      <div class="filters">
        <select v-model="filters.eventType">
          <option value="">All Events</option>
          <option value="document_upload">Document Uploads</option>
          <option value="maintenance_log">Maintenance</option>
          <option value="warranty_claim">Warranty</option>
        </select>

        <select v-model="filters.entityId" v-if="entities.length > 1">
          <option value="">All Boats</option>
          <option v-for="entity in entities" :key="entity.id" :value="entity.id">
            {{ entity.name }}
          </option>
        </select>
      </div>
    </header>

    <div class="timeline-container">
      <div v-for="(group, date) in groupedEvents" :key="date" class="timeline-group">
        <div class="date-marker">{{ formatDateHeader(date) }}</div>

        <div v-for="event in group" :key="event.id" class="timeline-event">
          <div class="event-icon" :class="`icon-${event.eventType}`">
            <i :class="getEventIcon(event.eventType)"></i>
          </div>

          <div class="event-content">
            <div class="event-header">
              <h3>{{ event.title }}</h3>
              <span class="event-time">{{ formatTime(event.createdAt) }}</span>
            </div>

            <p class="event-description">{{ event.description }}</p>

            <div class="event-meta">
              <span class="event-user">{{ event.user.name }}</span>
              <span v-if="event.entity" class="event-entity">{{ event.entity.name }}</span>
            </div>

            <a v-if="event.referenceId" :href="`/${event.referenceType}/${event.referenceId}`" class="event-link">
              View {{ event.referenceType }} →
            </a>
          </div>
        </div>
      </div>

      <div v-if="hasMore" class="load-more">
        <button @click="loadMore" :disabled="loading">Load More</button>
      </div>
    </div>
  </div>
</template>

<script setup>
import { ref, computed, watch, onMounted } from 'vue';
import { useOrganizationStore } from '@/stores/organization';
import api from '@/services/api';

const orgStore = useOrganizationStore();
const events = ref([]);
const entities = ref([]);
const loading = ref(false);
const hasMore = ref(true);
const offset = ref(0);

const filters = ref({
  eventType: '',
  entityId: ''
});

// Reload from the top whenever a filter changes
watch(filters, () => {
  offset.value = 0;
  loadEvents();
}, { deep: true });

// Group events by date
const groupedEvents = computed(() => {
  const groups = {};

  events.value.forEach(event => {
    const date = new Date(event.createdAt);
    const today = new Date();
    const yesterday = new Date(today);
    yesterday.setDate(yesterday.getDate() - 1);

    let groupKey;
    if (isSameDay(date, today)) {
      groupKey = 'Today';
    } else if (isSameDay(date, yesterday)) {
      groupKey = 'Yesterday';
    } else if (isWithinDays(date, 7)) {
      groupKey = date.toLocaleDateString('en-US', { weekday: 'long' });
    } else if (isWithinDays(date, 30)) {
      groupKey = 'This Month';
    } else {
      groupKey = date.toLocaleDateString('en-US', { month: 'long', year: 'numeric' });
    }

    if (!groups[groupKey]) {
      groups[groupKey] = [];
    }
    groups[groupKey].push(event);
  });

  return groups;
});

async function loadEvents() {
  loading.value = true;

  try {
    const params = {
      limit: 50,
      offset: offset.value,
      ...filters.value
    };

    const response = await api.get(`/organizations/${orgStore.currentOrgId}/timeline`, { params });

    if (offset.value === 0) {
      events.value = response.data.events;
    } else {
      events.value.push(...response.data.events);
    }

    hasMore.value = response.data.pagination.hasMore;
  } catch (error) {
    console.error('Failed to load timeline:', error);
  } finally {
    loading.value = false;
  }
}

function loadMore() {
  offset.value += 50;
  loadEvents();
}

function getEventIcon(eventType) {
  const icons = {
    document_upload: 'i-carbon-document',
    maintenance_log: 'i-carbon-tools',
    warranty_claim: 'i-carbon-warning-alt',
    settings_change: 'i-carbon-settings'
  };
  return icons[eventType] || 'i-carbon-circle-dash';
}

function formatTime(timestamp) {
  return new Date(timestamp).toLocaleTimeString('en-US', {
    hour: '2-digit',
    minute: '2-digit'
  });
}

function formatDateHeader(dateStr) {
  return dateStr;
}

function isSameDay(d1, d2) {
  return d1.toDateString() === d2.toDateString();
}

function isWithinDays(date, days) {
  const diff = Date.now() - date.getTime();
  return diff < days * 24 * 60 * 60 * 1000;
}

onMounted(async () => {
  await loadEvents();
  // Load entities for filter
  entities.value = await api.get(`/organizations/${orgStore.currentOrgId}/entities`).then(r => r.data);
});
</script>

<style scoped>
.timeline-container {
  max-width: 800px;
  margin: 0 auto;
  padding: 2rem;
}

.timeline-event {
  display: flex;
  gap: 1.5rem;
  margin-bottom: 2rem;
  padding: 1.5rem;
  background: #fff;
  border-radius: 8px;
  box-shadow: 0 1px 3px rgba(0,0,0,0.1);
}

.event-icon {
  width: 40px;
  height: 40px;
  border-radius: 50%;
  display: flex;
  align-items: center;
  justify-content: center;
  flex-shrink: 0;
}

.icon-document_upload { background: #0f62fe; color: white; }
.icon-maintenance_log { background: #24a148; color: white; }
.icon-warranty_claim { background: #ff832b; color: white; }

.date-marker {
  font-size: 0.875rem;
  font-weight: 600;
  color: #525252;
  margin: 2rem 0 1rem;
  text-transform: uppercase;
  letter-spacing: 0.05em;
}
</style>
```

---

## Backend Service Layer

**File:** `server/services/activity-logger.js` (NEW)

```javascript
/**
 * Activity Logger Service
 * Automatically logs events to organization timeline
 */
import { getDb } from '../config/db.js';
import { v4 as uuidv4 } from 'uuid';

export async function logActivity({
  organizationId,
  entityId = null,
  userId,
  eventType,
  eventAction,
  eventTitle,
  eventDescription = '',
  metadata = {},
  referenceId = null,
  referenceType = null
}) {
  const db = getDb();

  const activity = {
    id: `evt_${uuidv4()}`,
    organization_id: organizationId,
    entity_id: entityId,
    user_id: userId,
    event_type: eventType,
    event_action: eventAction,
    event_title: eventTitle,
    event_description: eventDescription,
    metadata: JSON.stringify(metadata),
    reference_id: referenceId,
    reference_type: referenceType,
    created_at: Date.now() // Unix epoch milliseconds
  };

  db.prepare(`
    INSERT INTO activity_log (
      id, organization_id, entity_id, user_id, event_type, event_action,
      event_title, event_description, metadata, reference_id, reference_type, created_at
    ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
  `).run(
    activity.id,
    activity.organization_id,
    activity.entity_id,
    activity.user_id,
    activity.event_type,
    activity.event_action,
    activity.event_title,
    activity.event_description,
    activity.metadata,
    activity.reference_id,
    activity.reference_type,
    activity.created_at
  );

  return activity;
}
```

**Usage Example (in upload route):**

```javascript
// server/routes/upload.js (after successful upload)

import { logActivity } from '../services/activity-logger.js';

// ... after document saved to DB ...

await logActivity({
  organizationId: organizationId,
  entityId: entityId,
  userId: req.user.id,
  eventType: 'document_upload',
  eventAction: 'created',
  eventTitle: title,
  eventDescription: `Uploaded ${sanitizedFilename} (${(file.size / 1024).toFixed(1)}KB)`,
  metadata: {
    fileSize: file.size,
    fileName: sanitizedFilename,
    documentType: documentType
  },
  referenceId: documentId,
  referenceType: 'document'
});
```

---

## Implementation Checklist

### Phase 1: Database & Backend (1 hour)
- [ ] Create migration: `010_activity_timeline.sql`
- [ ] Run migration: `node scripts/run-migration.js 010`
- [ ] Create `services/activity-logger.js`
- [ ] Create route: `routes/timeline.js` with GET endpoint
- [ ] Add activity logging to upload route
- [ ] Test API: `curl /api/organizations/:id/timeline`

### Phase 2: Frontend (1.5 hours)
- [ ] Create `views/Timeline.vue`
- [ ] Add route to `router.js`: `/timeline`
- [ ] Add navigation link in header
- [ ] Implement infinite scroll
- [ ] Add event type filters
- [ ] Style timeline with date grouping

### Phase 3: Backfill (30 min)
- [ ] Write script to backfill existing documents into activity_log
- [ ] Run: `node scripts/backfill-timeline.js`
- [ ] Verify timeline shows historical data
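
The backfill script mostly reduces to mapping each existing `documents` row onto an `activity_log` row and inserting it. A sketch of that mapping, following the schema above; the field names read off `doc` (other than `id` and `organization_id`) are assumptions about the existing documents table, and the `evt_backfill_` prefix is an illustrative convention:

```javascript
// Hypothetical mapper for scripts/backfill-timeline.js: turn an existing
// document row into an activity_log record, preserving the original
// upload time so the timeline shows history in the right order.
function documentToActivity(doc) {
  return {
    id: `evt_backfill_${doc.id}`,       // deterministic, so reruns are idempotent
    organization_id: doc.organization_id,
    entity_id: doc.entity_id ?? null,
    user_id: doc.uploaded_by,           // assumed column name
    event_type: 'document_upload',
    event_action: 'created',
    event_title: doc.title,
    event_description: `Backfilled from existing document: ${doc.filename}`,
    metadata: JSON.stringify({ backfilled: true }),
    reference_id: doc.id,
    reference_type: 'document',
    created_at: doc.created_at          // keep original upload timestamp
  };
}
```

The script would then iterate over all document rows and feed each mapped record into the same `INSERT` used by the activity logger.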

---

## Demo Talking Points

**"Here's the organization timeline - it shows everything that's happened with your boat:"**

- "Today: You uploaded the bilge pump manual (6.7MB, fully searchable)"
- "Last week: Maintenance service logged for engine oil change"
- "This month: 12 documents uploaded, 3 maintenance records"
- "Scroll down to see older activity - everything is tracked automatically"

**Value Proposition:**
- Never lose track of what's been done
- See maintenance history at a glance
- Useful for warranty claims ("when did we service that?")
- Audit trail for multi-user organizations

---

**Ready to implement?** This can be built in parallel with OCR optimization. Timeline doesn't block demo, but adds significant perceived value.

IMPROVEMENT_PLAN_OCR_AND_UPLOADS.md (new file, 431 lines)
# NaviDocs Improvement Plan: Smart OCR + Multi-Format Upload

**Status:** Ready for implementation
**Estimated Time:** 2-3 hours
**Priority:** P1 (Performance + Feature)

---

## Problem 1: Inefficient OCR Processing

### Current Behavior
- **ALL PDF pages** go through Tesseract OCR, even if they contain native text
- Liliane1 manual (100 pages, mostly text) took 3+ minutes to OCR
- CPU-intensive: ~1.5 seconds per page
- `pdf-parse` library is installed but only used for page count

### Solution: Hybrid Text Extraction

**File:** `server/services/ocr.js` (lines 36-96)

```javascript
export async function extractTextFromPDF(pdfPath, options = {}) {
  const { language = 'eng', onProgress, forceOCR = false } = options;

  try {
    const pdfBuffer = readFileSync(pdfPath);
    const pdfData = await pdf(pdfBuffer);
    const pageCount = pdfData.numpages;

    console.log(`OCR: Processing ${pageCount} pages from ${pdfPath}`);

    const results = [];

    // NEW: Try native text extraction first
    let nativeText = pdfData.text?.trim() || '';

    // If PDF has native text and we're not forcing OCR
    if (nativeText.length > 100 && !forceOCR) {
      console.log(`[OCR Optimization] PDF has native text (${nativeText.length} chars), extracting per-page...`);

      // Extract text page by page using pdfjs-dist
      const pageTexts = await extractNativeTextPerPage(pdfPath, pageCount);

      for (let pageNum = 1; pageNum <= pageCount; pageNum++) {
        const pageText = pageTexts[pageNum - 1] || '';

        // If page has substantial native text (>50 chars), use it
        if (pageText.length > 50) {
          results.push({
            pageNumber: pageNum,
            text: pageText.trim(),
            confidence: 0.99, // Native text = high confidence
            method: 'native-extraction'
          });

          console.log(`OCR: Page ${pageNum}/${pageCount} native text (${pageText.length} chars, no OCR needed)`);
        } else {
          // Page has little/no text, run OCR (likely image/diagram)
          const imagePath = await convertPDFPageToImage(pdfPath, pageNum);
          const ocrResult = await runTesseractOCR(imagePath, language);

          results.push({
            pageNumber: pageNum,
            text: ocrResult.text.trim(),
            confidence: ocrResult.confidence,
            method: 'tesseract-ocr'
          });

          unlinkSync(imagePath);
          console.log(`OCR: Page ${pageNum}/${pageCount} OCR (confidence: ${ocrResult.confidence.toFixed(2)})`);
        }

        if (onProgress) onProgress(pageNum, pageCount);
      }

      return results;
    }

    // Fallback: Full OCR (scanned PDF or forced)
    console.log('[OCR] No native text found, running full Tesseract OCR...');

    // ... existing OCR code ...
  } catch (err) {
    // Surface extraction failures to the OCR worker
    console.error('[OCR] Extraction failed:', err);
    throw err;
  }
}

// NEW FUNCTION: Extract native text per page
async function extractNativeTextPerPage(pdfPath, pageCount) {
  // Use pdfjs-dist for robust per-page extraction
  const pdfjsLib = await import('pdfjs-dist/legacy/build/pdf.mjs');

  const data = new Uint8Array(readFileSync(pdfPath));
  const pdfDoc = await pdfjsLib.getDocument({ data }).promise;

  const pageTexts = [];

  for (let pageNum = 1; pageNum <= pageCount; pageNum++) {
    const page = await pdfDoc.getPage(pageNum);
    const textContent = await page.getTextContent();
    const pageText = textContent.items.map(item => item.str).join(' ');
    pageTexts.push(pageText);
  }

  return pageTexts;
}
```
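
The per-page branch above boils down to one decision: is there enough native text to skip Tesseract? Pulling that decision into a helper makes the threshold directly testable; the name and signature below are illustrative, not existing code:

```javascript
// Decide per page whether native extraction is good enough or OCR is needed.
// `threshold` mirrors OCR_MIN_TEXT_THRESHOLD (chars of native text per page).
function pickExtractionMethod(pageText, { forceOCR = false, threshold = 50 } = {}) {
  if (forceOCR) return 'tesseract-ocr';
  return (pageText || '').trim().length > threshold
    ? 'native-extraction'
    : 'tesseract-ocr';
}
```

With this factored out, the loop body only has to branch on the returned method string, and a unit test can pin the 50-character boundary.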

**Dependencies to Install:**
```bash
npm install pdfjs-dist
```

**Expected Performance Gains:**
- Liliane1 (100 pages): **3 minutes → 5 seconds** (36x faster!)
- Text-heavy PDFs: ~99% reduction in processing time
- Scanned PDFs: No change (still need OCR)

**Configuration Option:**
```env
# .env
FORCE_OCR_ALL_PAGES=false   # Set true to always OCR (for testing)
OCR_MIN_TEXT_THRESHOLD=50   # Minimum chars to consider "native text"
```
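
`extractTextFromPDF` takes `forceOCR` as an option, so wiring these variables up is a small parsing step. A sketch (the helper name and placement are assumptions; only the variable names and defaults come from the config above):

```javascript
// Read the OCR tuning knobs from the environment with the documented defaults.
function readOcrConfig(env = process.env) {
  return {
    forceOCR: env.FORCE_OCR_ALL_PAGES === 'true',                  // default: false
    minTextThreshold: parseInt(env.OCR_MIN_TEXT_THRESHOLD, 10) || 50
  };
}
```

The worker would call this once at startup and pass the result into the extraction options, keeping the env parsing out of `ocr.js` itself.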

---

## Problem 2: PDF-Only Upload Limitation

### Current Behavior
- Only `.pdf` files accepted
- File validation: `server/services/file-safety.js` (lines 10-11)
- No support for JPG, MD, TXT, DOC, XLS

### Solution: Multi-Format Document Processing

**Step 1: Update File Validation**

**File:** `server/services/file-safety.js`

```javascript
import path from 'path'; // needed by getFileCategory() below

const ALLOWED_EXTENSIONS = [
  // Documents
  '.pdf',
  '.doc', '.docx',
  '.xls', '.xlsx',
  '.txt', '.md',

  // Images
  '.jpg', '.jpeg', '.png', '.webp',

  // Optional: Presentations
  '.ppt', '.pptx'
];

const ALLOWED_MIME_TYPES = [
  // PDFs
  'application/pdf',

  // Microsoft Office
  'application/msword',
  'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
  'application/vnd.ms-excel',
  'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',

  // Text
  'text/plain',
  'text/markdown',

  // Images
  'image/jpeg',
  'image/png',
  'image/webp'
];

// NEW: Detect file category
export function getFileCategory(filename) {
  const ext = path.extname(filename).toLowerCase();

  if (['.pdf'].includes(ext)) return 'pdf';
  if (['.doc', '.docx'].includes(ext)) return 'word';
  if (['.xls', '.xlsx'].includes(ext)) return 'excel';
  if (['.txt', '.md'].includes(ext)) return 'text';
  if (['.jpg', '.jpeg', '.png', '.webp'].includes(ext)) return 'image';
  if (['.ppt', '.pptx'].includes(ext)) return 'presentation';

  return 'unknown';
}
```

**Step 2: Create Processor Routing**

**File:** `server/services/document-processor.js` (NEW)

```javascript
/**
 * Route documents to appropriate processor based on file type
 */
import { readFileSync } from 'fs';
import { extractTextFromPDF, extractTextFromImage } from './ocr.js';
import { getFileCategory } from './file-safety.js';

export async function processDocument(filePath, options = {}) {
  const category = getFileCategory(filePath);

  console.log(`[Document Processor] Processing ${category} file: ${filePath}`);

  switch (category) {
    case 'pdf':
      return await extractTextFromPDF(filePath, options);

    case 'image':
      return await processImageFile(filePath, options);

    case 'word':
      return await processWordDocument(filePath, options);

    case 'excel':
      return await processExcelDocument(filePath, options);

    case 'text':
      return await processTextFile(filePath, options);

    default:
      throw new Error(`Unsupported file category: ${category}`);
  }
}

// Image files: Direct OCR
async function processImageFile(imagePath, options = {}) {
  const { language = 'eng' } = options;

  console.log('[Image Processor] Running OCR on image');
  const ocrResult = await extractTextFromImage(imagePath, language);

  return [{
    pageNumber: 1,
    text: ocrResult.text,
    confidence: ocrResult.confidence,
    method: 'tesseract-ocr'
  }];
}

// Word documents: Extract native text (no OCR needed)
async function processWordDocument(docPath, options = {}) {
  // Use mammoth.js to extract text from .docx
  const mammoth = await import('mammoth');

  const result = await mammoth.extractRawText({ path: docPath });
  const text = result.value;

  console.log(`[Word Processor] Extracted ${text.length} chars from DOCX`);

  return [{
    pageNumber: 1,
    text: text,
    confidence: 0.99,
    method: 'native-extraction'
  }];
}

// Excel: Extract text from cells
async function processExcelDocument(xlsPath, options = {}) {
  const XLSX = await import('xlsx');

  const workbook = XLSX.readFile(xlsPath);
  const sheets = [];

  workbook.SheetNames.forEach((sheetName, idx) => {
    const worksheet = workbook.Sheets[sheetName];
    const text = XLSX.utils.sheet_to_csv(worksheet);

    sheets.push({
      pageNumber: idx + 1,
      text: text,
      confidence: 0.99,
      method: 'native-extraction',
      sheetName: sheetName
    });
  });

  console.log(`[Excel Processor] Extracted ${sheets.length} sheets`);
  return sheets;
}

// Plain text / Markdown: Direct read
async function processTextFile(txtPath, options = {}) {
  const text = readFileSync(txtPath, 'utf-8');

  return [{
    pageNumber: 1,
    text: text,
    confidence: 1.0,
    method: 'native-extraction'
  }];
}
```

**Dependencies:**
```bash
npm install mammoth xlsx
```

**Step 3: Update OCR Worker**

**File:** `server/workers/ocr-worker.js` (line 96)

```javascript
// OLD:
const ocrResults = await extractTextFromPDF(filePath, {
  language: document.language || 'eng',
  onProgress: updateProgress
});

// NEW (also add: import { processDocument } from '../services/document-processor.js'):
const ocrResults = await processDocument(filePath, {
  language: document.language || 'eng',
  onProgress: updateProgress
});
```

---

## Implementation Checklist

### Phase 1: Smart OCR (1 hour)
- [ ] Install `pdfjs-dist`: `npm install pdfjs-dist`
- [ ] Add `extractNativeTextPerPage()` function to `ocr.js`
- [ ] Modify `extractTextFromPDF()` to try native extraction first
- [ ] Add `OCR_MIN_TEXT_THRESHOLD` env variable
- [ ] Test with Liliane1 manual (should be 36x faster)
- [ ] Verify scanned PDFs still work

### Phase 2: Multi-Format Upload (1.5 hours)
- [ ] Update `ALLOWED_EXTENSIONS` and `ALLOWED_MIME_TYPES` in `file-safety.js`
- [ ] Create `getFileCategory()` function
- [ ] Install processors: `npm install mammoth xlsx`
- [ ] Create `document-processor.js` with routing logic
- [ ] Implement `processImageFile()`, `processWordDocument()`, etc.
- [ ] Update `ocr-worker.js` to use `processDocument()`
- [ ] Test each file type: JPG, TXT, DOCX, XLSX

### Phase 3: Frontend Updates (30 min)
- [ ] Update upload form to accept multiple file types
- [ ] Add file type icons (PDF, Word, Excel, Image, Text)
- [ ] Show file type badge in document list
- [ ] Update upload instructions

---

## Testing Plan

### Smart OCR Testing
```bash
# Test native text extraction (ocr.js is an ES module, so use
# --input-type=module for top-level await); wrap in `time` to check speed
time node --input-type=module -e "
import { extractTextFromPDF } from './server/services/ocr.js';
const result = await extractTextFromPDF('/path/to/text-pdf.pdf');
console.log('Method:', result[0].method); // Should be 'native-extraction'
"
# Total time should be well under 5s for a text-heavy PDF

# Test scanned PDF still works
# Upload a scanned document, verify OCR runs
```

### Multi-Format Testing
```bash
# Test each file type
curl -X POST http://localhost:8001/api/upload \
  -H "Authorization: Bearer $TOKEN" \
  -F "file=@test.jpg" \
  -F "title=Test Image" \
  -F "documentType=photo" \
  -F "organizationId=$ORG_ID"

# Repeat for: .txt, .docx, .xlsx, .md
```

---

## Performance Comparison

| Document Type | Before | After | Improvement |
|---------------|--------|-------|-------------|
| Text PDF (100p) | 180s | 5s | **36x faster** |
| Scanned PDF (100p) | 180s | 180s | No change (needs OCR) |
| DOCX (50 pages) | N/A | 2s | New feature |
| JPG (1 image) | N/A | 1.5s | New feature |
| XLSX (10 sheets) | N/A | 0.5s | New feature |

---

## Security Considerations

1. **Office Documents:** Use `mammoth` (safe) instead of LibreOffice (shell exec)
2. **File Size Limits:** Add a separate, lower cap for single images (global cap is currently 50MB; see `MAX_IMAGE_SIZE` below)
3. **MIME Type Validation:** Already enforced via `file-type` library
4. **Malware Scanning:** Consider ClamAV integration for Office files

---

## Configuration Options

```env
# .env additions

# OCR Optimization
FORCE_OCR_ALL_PAGES=false
OCR_MIN_TEXT_THRESHOLD=50        # Chars per page to skip OCR

# File Upload
MAX_FILE_SIZE=52428800           # 50MB (existing)
MAX_IMAGE_SIZE=10485760          # 10MB for single images
ALLOWED_FILE_CATEGORIES=pdf,image,word,excel,text  # Comma-separated

# Optional: Office conversion
ENABLE_OFFICE_CONVERSION=true
OFFICE_MAX_PAGES=200             # Prevent huge spreadsheets
```
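The `OCR_MIN_TEXT_THRESHOLD` knob drives the per-page skip decision. A hypothetical sketch of that decision (the worker-side names and structure are assumptions, not taken from the codebase):

```shell
# Hypothetical sketch of the skip-OCR decision driven by OCR_MIN_TEXT_THRESHOLD.
OCR_MIN_TEXT_THRESHOLD="${OCR_MIN_TEXT_THRESHOLD:-50}"

needs_ocr() {
  # A page "needs OCR" when its natively extracted text is shorter than the threshold.
  local page_text="$1"
  [ "${#page_text}" -lt "$OCR_MIN_TEXT_THRESHOLD" ]
}

# A near-empty extraction result falls below the threshold, so OCR runs.
if needs_ocr "scan"; then decision="run OCR"; else decision="use native text"; fi
echo "$decision"   # → run OCR
```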

---

## Rollback Plan

If issues arise:
1. Set `FORCE_OCR_ALL_PAGES=true` to revert to old behavior
2. Remove new file types from `ALLOWED_EXTENSIONS`
3. Restart worker: `pm2 restart navidocs-worker`

---

**Next Steps:** Deploy Phase 1 (Smart OCR) first to get immediate performance gains on existing PDFs, then Phase 2 (Multi-format) for new features.
673 LAUNCH_CHECKLIST.md Normal file

@ -0,0 +1,673 @@
# NaviDocs Launch Checklist System

**IF.TTT Citation:** `if://doc/navidocs/launch-checklist-system/v1.0`
**Created:** 2025-11-13
**Purpose:** Bulletproof launch verification system for zero-failure demos

---

## Overview

This system provides **four automated scripts** that ensure NaviDocs always launches correctly and catches issues before they cause demo failures. Based on comprehensive analysis of Agent reports 1-5, these scripts address all known failure modes.

### Scripts

1. **`pre-launch-checklist.sh`** - Run BEFORE starting services
2. **`verify-running.sh`** - Run AFTER starting services
3. **`debug-logs.sh`** - Aggregate all logs for rapid debugging
4. **`version-check.sh`** - Verify exact running version

---

## Quick Start

```bash
# 1. Pre-flight check (MUST RUN FIRST)
./pre-launch-checklist.sh

# 2. Start services (only if pre-check passes)
./start-all.sh

# 3. Verify everything is working (within 30 seconds)
./verify-running.sh

# 4. If issues detected, debug
./debug-logs.sh
```

---

## Script 1: Pre-Launch Checklist (`pre-launch-checklist.sh`)

### Purpose
Verify system state BEFORE starting services to catch issues early.

### What It Checks

**IF.TTT Citations:**
- `if://agent/1/findings/backend-health` - Backend API health
- `if://agent/2/findings/port-fallback` - Vite port fallback behavior
- `if://agent/3/findings/database-size` - Database integrity
- `if://agent/5/findings/meilisearch-index-missing` - Search index status

#### 1. Git Repository State
- Current commit hash and branch
- Uncommitted changes detection
- Recent commits (helps identify version)

#### 2. Port Availability
- **Port 8001** - Backend API (will be killed if occupied)
- **Port 8080** - Frontend Vite primary (will be killed if occupied)
- **Port 8081** - Frontend Vite fallback (warning only)
- **Port 7700** - Meilisearch (can be running)
- **Port 6379** - Redis (can be running)

**Critical Finding (Agent 2):** Vite automatically falls back to 8081 if 8080 is occupied. The script detects this and warns accordingly.

#### 3. Node.js Version
- **Required:** `v20.19.5`
- **Acceptable:** Any v20.x (warns on minor version mismatch)
- **Fails:** Any other major version

#### 4. Database Integrity
- File exists at `/home/setup/navidocs/server/db/navidocs.db`
- Readable and not locked
- Document count verification
- **IF.TTT:** `if://agent/3/findings/documents-count/[N]`
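The database check above can be sketched in a few lines of shell. The path comes from this section; the count query is shown as a dry-run echo so the sketch runs even without the demo database, and `check_db` is an illustrative name, not taken from the script:

```shell
# Sketch of the database integrity step: readable file, then a document count.
check_db() {
  local db="$1"
  if [ -r "$db" ]; then
    echo "PASS: database readable ($db)"
  else
    echo "FAIL: database missing or unreadable ($db)"
  fi
}

check_db "${NAVIDOCS_DB:-/home/setup/navidocs/server/db/navidocs.db}"
echo "Next step would be: sqlite3 <db> 'SELECT COUNT(*) FROM documents;'"
```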

#### 5. Redis Connectivity
- Redis server responding to `PING`
- **Critical for:** Job queue (OCR processing)

#### 6. Meilisearch Status
- Docker container running
- HTTP health endpoint responding
- **Critical Check:** `navidocs-pages` index exists
- **Agent 5 Finding:** Index missing causes OCR to fail silently
- **IF.TTT:** `if://agent/5/findings/meilisearch-index-missing`

#### 7. Dependencies Installed
- Server `node_modules` exists (package count)
- Client `node_modules` exists (package count)

#### 8. Zombie Process Detection
- Existing backend processes (will be killed)
- Existing frontend processes (will be killed)
- Existing OCR worker processes (will be killed)
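A minimal sketch of this sweep, reusing the same `pgrep` patterns that appear in the process checks elsewhere in this document (the function name is illustrative):

```shell
# Report stale processes matching each pattern; start-all.sh kills them later.
sweep_report() {
  local pattern pids
  for pattern in "$@"; do
    pids=$(pgrep -f "$pattern" 2>/dev/null || true)
    if [ -n "$pids" ]; then
      echo "WARN: stale processes for '$pattern': $pids (start-all.sh will kill them)"
    else
      echo "OK: no stale processes for '$pattern'"
    fi
  done
}

sweep_report "navidocs.*index.js" "vite.*navidocs" "ocr-worker.js"
```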

#### 9. Log Files Accessible
- `/tmp` directory writable
- Previous log files detected (size and age)

#### 10. Environment Configuration
- `.env` file exists (optional)
- **Agent 1 Warning:** `SETTINGS_ENCRYPTION_KEY` not set
- **Impact:** Settings won't persist across restarts
- **Fix:** Generate with `node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"`
- **IF.TTT:** `if://agent/1/findings/settings-encryption-key`

#### 11. Docker Status
- Docker daemon running
- Meilisearch container status (running/stopped/missing)

#### 12. Uploads Directory
- Directory exists at `/home/setup/navidocs/uploads`
- Writable by current user
- **IF.TTT:** `if://agent/5/findings/uploads-directory`

### Exit Codes
- **0** - All checks passed (safe to launch)
- **0** - Warnings only (safe to launch with degraded features)
- **1** - Critical failures (DO NOT LAUNCH)

### Example Output
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 NaviDocs Pre-Launch Checklist
IF.TTT Citation: if://doc/navidocs/pre-launch-checklist/v1.0
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

━━━ PORT AVAILABILITY ━━━
IF.TTT: if://test/navidocs/port-availability
✅ PASS: Port 8001 (Backend API) available
✅ PASS: Port 8080 (Frontend (Vite)) available
✅ PASS: Port 7700 (Meilisearch) already in use by meilisearch (PID: 12345)
✅ PASS: Port 6379 (Redis) already in use by redis-server (PID: 67890)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 PRE-LAUNCH CHECKLIST SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

✅ PASSED: 28
⚠️  WARNINGS: 2
❌ FAILED: 0

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ READY TO LAUNCH
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

All checks passed! Safe to run: ./start-all.sh
```

---

## Script 2: Verify Running (`verify-running.sh`)

### Purpose
Verify all services are ACTUALLY RUNNING and responding after `start-all.sh`.

### What It Checks

#### 1. Process Verification
- Backend process running (with PID)
- Frontend (Vite) process running
- OCR worker process running
- Redis process running
- Meilisearch Docker container running

#### 2. HTTP Endpoint Verification (with timing)
- `GET /health` - Backend health check (<100ms expected)
- `GET /` - Frontend main page (Vue app HTML)
- `GET /health` - Meilisearch health check

**All checks include retry logic** (up to 5 attempts with 2s delay).
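The retry pattern described above can be sketched as a small shell helper; the real scripts' helper may be named and structured differently:

```shell
# Retry an HTTP check: up to $2 attempts (default 5), $3 seconds apart (default 2).
check_with_retry() {
  local url="$1" attempts="${2:-5}" delay="${3:-2}" i
  for i in $(seq 1 "$attempts"); do
    if curl -sf --max-time 3 "$url" >/dev/null 2>&1; then
      echo "PASS after $i attempt(s)"
      return 0
    fi
    [ "$i" -lt "$attempts" ] && sleep "$delay"
  done
  echo "FAIL after $attempts attempts"
  return 1
}
```

Usage: `check_with_retry http://localhost:8001/health`.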

#### 3. API Functionality Tests
- `GET /api/documents` - Documents list API
- `GET /api/search/health` - Search API health
- Parses response to verify JSON structure

#### 4. Redis Connectivity
- `PING` command
- OCR queue length (`bull:ocr-queue:wait`)

#### 5. Database Access
- File exists and readable
- Quick query to verify not locked
- Document count

#### 6. End-to-End Smoke Test (Optional)
If `test-manual.pdf` exists:
1. Upload document via API
2. Wait for OCR processing (max 10s)
3. Verify document retrieval
4. Confirms entire pipeline works
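Step 2 of the smoke test boils down to a polling loop. A sketch under stated assumptions: `get_status` is a stub standing in for the real API call (roughly `curl -s http://localhost:8001/api/documents/<id>` plus JSON parsing), and the function name is illustrative:

```shell
# Stub standing in for the real status lookup.
get_status() { echo "indexed"; }

# Poll document status until "indexed" or timeout (seconds, default 10).
wait_for_ocr() {
  local timeout="${1:-10}" deadline status
  deadline=$((SECONDS + timeout))
  while [ "$SECONDS" -lt "$deadline" ]; do
    status=$(get_status)
    if [ "$status" = "indexed" ]; then
      echo "OCR complete (status: $status)"
      return 0
    fi
    sleep 1
  done
  echo "Timeout (last status: ${status:-unknown})"
  return 1
}

wait_for_ocr 10   # → OCR complete (status: indexed)
```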

**IF.TTT Citations:**
- `if://agent/1/findings/backend-health`
- `if://agent/5/findings/upload-success`
- `if://agent/5/findings/ocr-complete`

#### 7. Log File Activity
- Backend log modified within last 60s
- Frontend log modified within last 60s
- OCR worker log modified within last 60s

### Exit Codes
- **0** - All systems operational (demo ready)
- **1** - Critical failures (NOT READY)

### Example Output
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 NaviDocs Runtime Verification
IF.TTT Citation: if://doc/navidocs/verify-running/v1.0
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

━━━ HTTP ENDPOINT VERIFICATION ━━━
Testing: http://localhost:8001/health
✅ PASS: Backend /health responding
   Time: 3ms

━━━ END-TO-END SMOKE TEST ━━━
Attempting quick document creation flow...
1. Uploading test document...
✅ PASS: Document upload successful (ID: e455cb64-0f77-4a9a-a599-0ff2826b7b8f)
   IF.TTT: if://agent/5/findings/upload-success
2. Waiting for OCR processing (max 10s)...
   Status: indexed, waiting...
✅ PASS: OCR processing completed (status: indexed)
   IF.TTT: if://agent/5/findings/ocr-complete

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 RUNTIME VERIFICATION SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

✅ PASSED: 22
❌ FAILED: 0

Total API response time: 127ms

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ ALL SYSTEMS OPERATIONAL
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

NaviDocs is ready for demo/presentation!

Access URLs:
  Frontend: http://localhost:8080
  Backend:  http://localhost:8001
  API Docs: http://localhost:8001/health
```

---

## Script 3: Debug Logs (`debug-logs.sh`)

### Purpose
Single consolidated view of ALL logs for rapid debugging when issues occur.

### What It Shows

#### 1. System Resource Usage
- Memory usage (RAM + swap)
- Disk usage (server, client, uploads directories)
- Process CPU/Memory (sorted by resource usage)

#### 2. Process Status
- Backend API (PID, uptime)
- Frontend Vite (PID, uptime)
- OCR Worker (PID, uptime)
- Redis (PID, uptime)
- Meilisearch (Docker container status)

#### 3. Port Usage
- Which ports are listening
- Which PIDs own each port
- Process name for each port

#### 4. Redis Queue Status
- **OCR Queue:**
  - Waiting jobs
  - Active jobs
  - Completed jobs
  - Failed jobs
- Connection statistics
- **IF.TTT:** `if://agent/1/findings/redis-status`
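These queue counters map onto BullMQ's Redis keys for the `ocr-queue` queue. Printed here as a dry-run; note the key layout is an assumption about BullMQ internals (wait/active as lists, completed/failed as sorted sets), so verify against your BullMQ version before relying on it:

```shell
# Print the redis-cli commands that would read each OCR queue counter.
queue="ocr-queue"
for state in wait active; do
  echo "redis-cli LLEN bull:${queue}:${state}"
done
for state in completed failed; do
  echo "redis-cli ZCARD bull:${queue}:${state}"
done
```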

#### 5. Meilisearch Status
- Health check response
- Index statistics (document count)
- **Detects:** Missing `navidocs-pages` index
- **IF.TTT:** `if://agent/5/findings/meilisearch-index-missing`

#### 6. Database Statistics
- File size and modification time
- Record counts:
  - Documents
  - Document pages
  - Organizations
  - Users
  - OCR jobs
- **IF.TTT:** `if://agent/3/findings/database-size`

#### 7. Service Logs (last N lines, default 100)
- **Backend API Log** (`/tmp/navidocs-backend.log`)
  - Color-coded: Errors (red), Warnings (yellow), Success (green), HTTP (cyan)
- **Frontend Vite Log** (`/tmp/navidocs-frontend.log`)
- **OCR Worker Log** (`/tmp/navidocs-ocr-worker.log`)

#### 8. Error Summary
- Aggregated errors from all logs (last 20)
- Tagged by source: `[BACKEND]`, `[FRONTEND]`, `[WORKER]`

### Usage
```bash
# Default: Last 100 lines from each log
./debug-logs.sh

# Custom: Last 500 lines from each log
./debug-logs.sh 500
```

### Example Output
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💻 SYSTEM RESOURCE USAGE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Memory Usage:
  Mem:   15Gi   8.2Gi   1.4Gi   345Mi   5.6Gi   6.5Gi
  Swap:  4.0Gi  0B      4.0Gi

Disk Usage (/home/setup/navidocs):
  218M  /home/setup/navidocs/server
  145M  /home/setup/navidocs/client
  89M   /home/setup/navidocs/uploads

NaviDocs Process Resource Usage:
  setup  2.3%  1.2%  node /home/setup/navidocs/server/index.js
  setup  1.8%  0.9%  /home/setup/navidocs/client/node_modules/.bin/vite
  setup  0.5%  0.3%  node workers/ocr-worker.js

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 REDIS QUEUE STATUS
IF.TTT: if://agent/1/findings/redis-status
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

✅ Redis responding to ping

OCR Queue Statistics:
  Waiting:   0 jobs
  Active:    0 jobs
  Completed: 5 jobs
  Failed:    0 jobs
```

---

## Script 4: Version Check (`version-check.sh`)

### Purpose
Verify EXACTLY which version is running (git commit, packages, dependencies).

### What It Shows

#### 1. Git Repository Version
- Full commit hash + short hash
- Current branch
- Git tag (if any)
- Commit author and date
- Commit message
- Working tree status (clean vs uncommitted changes)
- Recent commits (last 5)
- **IF.TTT:** `if://git/navidocs/commit/[HASH]`

#### 2. Node.js Environment
- Node.js version
- npm version
- Installation path
- Compatibility check vs required version (v20.19.5)

#### 3. Package.json Versions
- **Server:** Version + key dependencies (Express, SQLite, BullMQ, Meilisearch, ioredis)
- **Client:** Version + key dependencies (Vue, Vite, Pinia, Vue Router)

#### 4. Database Schema
- File size and modification time
- Table count
- Schema version (from `system_settings` table)
- Full table list

#### 5. Meilisearch Version
- Docker container version
- API version (via `/version` endpoint)
- Compatibility check (expects v1.6.x)

#### 6. Redis Version
- CLI version
- Server version (via `INFO server`)

#### 7. Running Services
- Backend API (PID, start time, uptime, command)
- Frontend Vite (PID, start time, uptime, listening port)
- API health check response

#### 8. Build Artifacts
- Server `node_modules` (package count, size)
- Client `node_modules` (package count, size)

#### 9. Version Fingerprint
Creates unique fingerprint combining:
- Git commit hash
- Server package version
- Client package version

**IF.TTT:** `if://version/navidocs/fingerprint/[MD5]`
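A sketch of how such a fingerprint can be derived from those three inputs; the exact field order and hashing inside `version-check.sh` may differ, and the hard-coded versions stand in for values read from the two `package.json` files:

```shell
# Combine commit + package versions, then hash into a stable fingerprint.
commit=$(git rev-parse --short HEAD 2>/dev/null || echo "nogit")
server_version="1.0.0"   # read from server/package.json in the real script
client_version="1.0.0"   # read from client/package.json in the real script
fingerprint=$(printf '%s:%s:%s' "$commit" "$server_version" "$client_version" | md5sum | cut -d' ' -f1)
echo "NaviDocs@${commit} (server:${server_version}, client:${client_version})"
echo "if://version/navidocs/fingerprint/${fingerprint}"
```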

### Example Output
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 NaviDocs Version Verification
IF.TTT Citation: if://doc/navidocs/version-check/v1.0
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

━━━ GIT REPOSITORY VERSION ━━━

✅ Git repository detected

Commit:  6ebb688 (6ebb688f3c2a1b4d5e6f7a8b9c0d1e2f3a4b5c6d)
Branch:  main
Tag:     No tag
Author:  Danny Stocker <danny@example.com>
Date:    2025-11-13 10:15:30 -0500
Message: [CLOUD SESSIONS] Complete guide for launching 5 cloud sessions

IF.TTT: if://git/navidocs/commit/6ebb688f3c2a1b4d5e6f7a8b9c0d1e2f3a4b5c6d

✅ Working tree clean

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 VERSION CHECK SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Version Fingerprint: NaviDocs@6ebb688 (server:1.0.0, client:1.0.0)
Node.js:     v20.19.5
Database:    2.0M (21 tables)
Meilisearch: 1.6.0
Redis:       7.0.12

IF.TTT: if://version/navidocs/fingerprint/a1b2c3d4e5f6...

Report generated: 2025-11-13 15:30:45 UTC
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

---

## Common Failure Modes & Recovery

Based on Agent reports 1-5, here are the most common issues and how the scripts detect/fix them:

### 1. Meilisearch Index Missing
**Agent 5 Finding:** `if://agent/5/findings/meilisearch-index-missing`

**Symptom:** OCR completes but search doesn't work
**Detected by:** `pre-launch-checklist.sh` (warns), `verify-running.sh` (checks index stats)
**Fix:**
```bash
curl -X POST http://localhost:7700/indexes \
  -H 'Authorization: Bearer 5T66jrwQ8F8cOk4dUlFY0Vp59fMnCsIfi4O6JZl9wzU=' \
  -H 'Content-Type: application/json' \
  -d '{"uid":"navidocs-pages"}'
```

### 2. Port 8080 Occupied (Vite Fallback)
**Agent 2 Finding:** `if://agent/2/findings/port-fallback`

**Symptom:** Frontend runs on port 8081 instead of 8080
**Detected by:** `pre-launch-checklist.sh` (warns about both 8080 and 8081)
**Fix:** Kill process on port 8080 before running `start-all.sh`

### 3. Settings Encryption Key Missing
**Agent 1 Finding:** `if://agent/1/findings/settings-encryption-key`

**Symptom:** Settings don't persist across restarts
**Detected by:** `pre-launch-checklist.sh` (warns)
**Fix:**
```bash
# Generate key
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"

# Add to server/.env
echo "SETTINGS_ENCRYPTION_KEY=<generated-key>" >> server/.env
```

### 4. Zombie Backend Processes
**Symptom:** Backend fails to start (port already in use)
**Detected by:** `pre-launch-checklist.sh` (warns, shows PIDs)
**Fix:** `start-all.sh` kills existing processes automatically

### 5. Database Locked
**Symptom:** API returns 500 errors on database queries
**Detected by:** `pre-launch-checklist.sh` (tries test query), `verify-running.sh` (database access check)
**Fix:** Stop all services, ensure no SQLite processes, restart

### 6. OCR Worker Not Processing
**Symptom:** Documents stuck in "processing" status
**Detected by:** `verify-running.sh` (checks worker process + E2E test), `debug-logs.sh` (OCR queue stats)
**Fix:** Check OCR worker logs, ensure Redis queue accessible

### 7. Frontend Returns 404
**Symptom:** Blank page or "Cannot GET /"
**Detected by:** `verify-running.sh` (checks for Vue app div in HTML)
**Fix:** Check frontend logs for Vite compilation errors

---

## IF.TTT Compliance

All scripts generate IF.TTT citations for traceability:

### Citation Format
```
if://[resource-type]/[component]/[identifier]/[version]
```
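Assembling a citation in that format is a one-liner; the helper name below is illustrative, not taken from the scripts:

```shell
# Build an IF.TTT citation from its four components.
ifttt_cite() {
  printf 'if://%s/%s/%s/%s\n' "$1" "$2" "$3" "$4"
}

ifttt_cite doc navidocs debug-logs v1.0   # → if://doc/navidocs/debug-logs/v1.0
```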

### Examples from Scripts

**Document Citations:**
- `if://doc/navidocs/pre-launch-checklist/v1.0`
- `if://doc/navidocs/verify-running/v1.0`
- `if://doc/navidocs/debug-logs/v1.0`
- `if://doc/navidocs/version-check/v1.0`

**Test Run Citations:**
- `if://test-run/navidocs/pre-launch/20251113-143055`
- `if://test-run/navidocs/verify-running/20251113-143120`
- `if://test-run/navidocs/debug-logs/20251113-143145`

**Agent Finding Citations:**
- `if://agent/1/findings/backend-health` (Agent 1: Backend health check)
- `if://agent/2/findings/port-fallback` (Agent 2: Vite port fallback)
- `if://agent/3/findings/database-size` (Agent 3: Database inspection)
- `if://agent/5/findings/meilisearch-index-missing` (Agent 5: Search index)
- `if://agent/5/findings/upload-success` (Agent 5: Document upload)

**Git Citations:**
- `if://git/navidocs/commit/[hash]`

**Version Citations:**
- `if://version/navidocs/fingerprint/[md5]`

**Log Citations:**
- `if://log/navidocs/backend/20251113`
- `if://log/navidocs/frontend/20251113`

---

## Integration with Existing Scripts

### Before Demo Workflow
```bash
# 1. Stop any running services
./stop-all.sh

# 2. Pre-flight check
./pre-launch-checklist.sh
# Exit code 0 = safe to launch
# Exit code 1 = fix failures first

# 3. Start services
./start-all.sh

# 4. Verify everything works
./verify-running.sh
# Exit code 0 = demo ready
# Exit code 1 = check debug logs

# 5. Optional: Check logs if issues
./debug-logs.sh
```

### Version Documentation Workflow
```bash
# Before demo, document exact version
./version-check.sh > /tmp/navidocs-version-$(date +%Y%m%d).txt

# Save fingerprint for reproducibility
grep "Version Fingerprint" /tmp/navidocs-version-*.txt
```

---

## Troubleshooting

### Script Won't Run
```bash
# Make executable
chmod +x pre-launch-checklist.sh verify-running.sh debug-logs.sh version-check.sh

# Check for DOS line endings (if copied from Windows)
dos2unix *.sh
```

### False Positives
Some warnings are expected in development:
- `SETTINGS_ENCRYPTION_KEY not set` - Non-critical for local dev
- `Port 8081 occupied` - Informational (Vite will use 8082)
- `Uncommitted changes detected` - Normal during development

### Script Hangs
If a script appears to hang:
- **Check:** Network timeouts (Meilisearch, Redis)
- **Fix:** Increase timeout in script (default: 3-5s)
- **Kill:** `Ctrl+C` (scripts use `set -e`, safe to interrupt)

---

## Maintenance

### When to Update Scripts

1. **New service added** - Add to pre-launch and verify-running checks
2. **Port changes** - Update port numbers in all scripts
3. **New critical dependency** - Add to pre-launch dependencies check
4. **New failure mode discovered** - Add detection logic and IF.TTT citation

### Testing Scripts

After modifications, test with:
```bash
# Test pre-launch with services stopped
./stop-all.sh
./pre-launch-checklist.sh
# Should show warnings for stopped services

# Test verify-running with services running
./start-all.sh
./verify-running.sh
# Should pass all checks

# Test debug-logs
./debug-logs.sh 50  # Show last 50 lines
```

---

## Performance

### Script Execution Times
- `pre-launch-checklist.sh`: 5-10 seconds (comprehensive)
- `verify-running.sh`: 20-30 seconds (includes E2E test)
- `debug-logs.sh`: 2-5 seconds (depends on log size)
- `version-check.sh`: 3-5 seconds

### Optimization Tips
- Run `pre-launch-checklist.sh` only once before startup
- Run `verify-running.sh` after startup and before demos
- Use `debug-logs.sh 100` (default) for quick checks, `500+` for deep debugging

---

## Related Documentation

- **Agent Reports:** `/tmp/agent1-backend-health.md` through `/tmp/agent5-document-upload.md`
- **Start/Stop Scripts:** `start-all.sh`, `stop-all.sh`
- **Session Documentation:** `/home/setup/infrafabric/NAVIDOCS_SESSION_SUMMARY.md`
- **IF.TTT Spec:** `/home/setup/infrafabric/docs/IF-URI-SCHEME.md`

---

**Last Updated:** 2025-11-13
**IF.TTT:** `if://doc/navidocs/launch-checklist-system/v1.0`
**Author:** Agent 9 (Launch Checklist System)
70 README-LAUNCH-CHECKLIST.txt Normal file

@ -0,0 +1,70 @@
╔══════════════════════════════════════════════════════════════════════════════╗
║                       NAVIDOCS LAUNCH CHECKLIST SYSTEM                       ║
║                     Bulletproof Demo Launch Verification                     ║
╚══════════════════════════════════════════════════════════════════════════════╝

QUICK START (3-hour presentation):
═════════════════════════════════════════════════════════════════════════════

1. PRE-LAUNCH CHECK (MANDATORY - run FIRST)
   ./pre-launch-checklist.sh
   → Exit 0 = Safe to launch
   → Exit 1 = Fix failures first

2. START SERVICES
   ./start-all.sh

3. VERIFY OPERATIONAL (within 30 seconds)
   ./verify-running.sh
   → Exit 0 = Demo ready
   → Exit 1 = Check debug logs

4. IF ISSUES OCCUR
   ./debug-logs.sh

═════════════════════════════════════════════════════════════════════════════

SCRIPTS:
═════════════════════════════════════════════════════════════════════════════

✓ pre-launch-checklist.sh - Run BEFORE starting (12 checks)
✓ verify-running.sh       - Run AFTER starting (7 checks + E2E test)
✓ debug-logs.sh           - Aggregate all logs for debugging
✓ version-check.sh        - Verify exact running version

═════════════════════════════════════════════════════════════════════════════

DOCUMENTATION:
═════════════════════════════════════════════════════════════════════════════

Read: LAUNCH_CHECKLIST.md (comprehensive guide)
  - All script documentation
  - Failure modes & recovery procedures
  - IF.TTT compliance details
  - Usage examples

═════════════════════════════════════════════════════════════════════════════

IF.TTT CITATIONS:
═════════════════════════════════════════════════════════════════════════════

Document:   if://doc/navidocs/launch-checklist-system/v1.0
Pre-Launch: if://doc/navidocs/pre-launch-checklist/v1.0
Verify:     if://doc/navidocs/verify-running/v1.0
Debug:      if://doc/navidocs/debug-logs/v1.0
Version:    if://doc/navidocs/version-check/v1.0

Agent Findings Referenced:
  if://agent/1/findings/backend-health
  if://agent/2/findings/port-fallback
  if://agent/3/findings/database-size
  if://agent/5/findings/meilisearch-index-missing
  (12 total agent findings cross-referenced)

═════════════════════════════════════════════════════════════════════════════

CREATED: 2025-11-13
AGENT: Agent 9 (Launch Checklist System)
STATUS: ✓ OPERATIONAL - DEMO READY

═════════════════════════════════════════════════════════════════════════════
315 debug-logs.sh Executable file

@ -0,0 +1,315 @@
#!/bin/bash

# NaviDocs Debug Log Aggregator
# IF.TTT Citation: if://doc/navidocs/debug-logs/v1.0
# Purpose: Single consolidated view of all logs for rapid debugging
# Created: 2025-11-13

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
NC='\033[0m'

# Default lines to show
LINES=${1:-100}

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo -e "${CYAN}🔍 NaviDocs Debug Log Aggregator${NC}"
echo -e "${CYAN}IF.TTT Citation: if://doc/navidocs/debug-logs/v1.0${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Showing last ${LINES} lines from each log"
echo "Generated: $(date -u '+%Y-%m-%d %H:%M:%S UTC')"
echo ""
echo "Usage: $0 [lines] (default: 100)"
echo ""

# Helper function to show log section
show_log() {
    local log_file=$1
    local service_name=$2
    local color=$3
    local lines=$4

    echo ""
    echo -e "${color}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${color}📄 $service_name${NC}"
    echo -e "${color}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

    if [ -f "$log_file" ]; then
        local size=$(ls -lh "$log_file" | awk '{print $5}')
        local modified=$(stat -c %y "$log_file" | cut -d' ' -f1,2 | cut -d'.' -f1)
        local total_lines=$(wc -l < "$log_file" 2>/dev/null || echo "0")

        echo -e "${color}File: $log_file${NC}"
        echo -e "${color}Size: $size | Lines: $total_lines | Modified: $modified${NC}"
        echo -e "${color}IF.TTT: if://log/navidocs/$(basename $log_file .log)/$(date +%Y%m%d)${NC}"
        echo ""

        # Show last N lines with syntax highlighting
        tail -${lines} "$log_file" | while IFS= read -r line; do
            # Highlight errors in red
            if echo "$line" | grep -iq "error\|fail\|exception\|critical"; then
                echo -e "${RED}$line${NC}"
            # Highlight warnings in yellow
            elif echo "$line" | grep -iq "warn\|warning"; then
                echo -e "${YELLOW}$line${NC}"
            # Highlight success in green
            elif echo "$line" | grep -iq "success\|complete\|ready\|✅\|started"; then
                echo -e "${GREEN}$line${NC}"
            # Highlight HTTP requests in cyan
            elif echo "$line" | grep -q "GET\|POST\|PUT\|DELETE\|PATCH"; then
                echo -e "${CYAN}$line${NC}"
            # Normal lines
            else
                echo "$line"
            fi
        done
    else
        echo -e "${RED}❌ Log file not found: $log_file${NC}"
        echo -e "${YELLOW}Service may not be running or hasn't created log yet${NC}"
    fi
}

# ============================================================================
# SYSTEM RESOURCE USAGE
# ============================================================================
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${MAGENTA}💻 SYSTEM RESOURCE USAGE${NC}"
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Memory usage
echo -e "${BLUE}Memory Usage:${NC}"
free -h | grep -E "Mem|Swap" | while read line; do
    echo "  $line"
done
echo ""

# Disk usage
echo -e "${BLUE}Disk Usage (/home/setup/navidocs):${NC}"
du -sh /home/setup/navidocs/{server,client,uploads} 2>/dev/null | column -t | sed 's/^/  /'
echo ""

# Process CPU/Memory
echo -e "${BLUE}NaviDocs Process Resource Usage:${NC}"
ps aux | grep -E "PID|navidocs.*index.js|vite|ocr-worker|redis-server|meilisearch" | grep -v grep | awk '{printf "  %-8s %-6s %-6s %s\n", $1, $3"%", $4"%", $11}' | head -10
echo ""

# ============================================================================
# PROCESS STATUS
# ============================================================================
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${MAGENTA}⚙️  PROCESS STATUS${NC}"
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

check_process() {
    local pattern=$1
    local name=$2
    local pid=$(pgrep -f "$pattern" 2>/dev/null || echo "")

    if [ -n "$pid" ]; then
        local uptime=$(ps -p $pid -o etime= 2>/dev/null | xargs)
        echo -e "${GREEN}✅ $name${NC} (PID: $pid, Uptime: $uptime)"
    else
        echo -e "${RED}❌ $name${NC} (Not running)"
    fi
}

check_process "navidocs.*index.js" "Backend API"
check_process "vite.*navidocs" "Frontend (Vite)"
check_process "ocr-worker.js" "OCR Worker"
check_process "redis-server" "Redis"

# Docker Meilisearch
MEILI_STATUS=$(docker ps --filter "name=boat-manuals-meilisearch" --format "{{.Status}}" 2>/dev/null || echo "Not running")
if [[ "$MEILI_STATUS" == *"Up"* ]]; then
    echo -e "${GREEN}✅ Meilisearch (Docker)${NC} ($MEILI_STATUS)"
else
    echo -e "${RED}❌ Meilisearch (Docker)${NC} ($MEILI_STATUS)"
fi

# ============================================================================
# PORT USAGE
# ============================================================================
echo ""
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${MAGENTA}🔌 PORT USAGE${NC}"
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

check_port() {
    local port=$1
    local service=$2

    if lsof -Pi :${port} -sTCP:LISTEN -t >/dev/null 2>&1; then
        local pid=$(lsof -Pi :${port} -sTCP:LISTEN -t)
        local process=$(ps -p $pid -o comm= 2>/dev/null | head -1 || echo "unknown")
|
||||
echo -e "${GREEN}✅ Port $port${NC} ($service) - PID $pid ($process)"
|
||||
else
|
||||
echo -e "${RED}❌ Port $port${NC} ($service) - Not listening"
|
||||
fi
|
||||
}
|
||||
|
||||
check_port 8001 "Backend API"
|
||||
check_port 8080 "Frontend (Vite primary)"
|
||||
check_port 8081 "Frontend (Vite fallback)"
|
||||
check_port 7700 "Meilisearch"
|
||||
check_port 6379 "Redis"
|
||||
|
||||
# ============================================================================
|
||||
# REDIS STATUS
|
||||
# ============================================================================
|
||||
echo ""
|
||||
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
|
||||
echo -e "${MAGENTA}📊 REDIS QUEUE STATUS${NC}"
|
||||
echo -e "${MAGENTA}IF.TTT: if://agent/1/findings/redis-status${NC}"
|
||||
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
|
||||
echo ""
|
||||
|
||||
if command -v redis-cli &> /dev/null; then
|
||||
REDIS_PING=$(redis-cli ping 2>/dev/null || echo "ERROR")
|
||||
if [ "$REDIS_PING" = "PONG" ]; then
|
||||
echo -e "${GREEN}✅ Redis responding to ping${NC}"
|
||||
echo ""
|
||||
|
||||
# Queue statistics
|
||||
echo -e "${BLUE}OCR Queue Statistics:${NC}"
|
||||
WAITING=$(redis-cli llen "bull:ocr-queue:wait" 2>/dev/null || echo "0")
|
||||
ACTIVE=$(redis-cli llen "bull:ocr-queue:active" 2>/dev/null || echo "0")
|
||||
COMPLETED=$(redis-cli llen "bull:ocr-queue:completed" 2>/dev/null || echo "0")
|
||||
FAILED=$(redis-cli llen "bull:ocr-queue:failed" 2>/dev/null || echo "0")
|
||||
|
||||
echo " Waiting: $WAITING jobs"
|
||||
echo " Active: $ACTIVE jobs"
|
||||
echo " Completed: $COMPLETED jobs"
|
||||
echo " Failed: $FAILED jobs"
|
||||
|
||||
# Database info
|
||||
echo ""
|
||||
echo -e "${BLUE}Redis Database Info:${NC}"
|
||||
redis-cli info stats 2>/dev/null | grep -E "total_connections_received|total_commands_processed|instantaneous_ops_per_sec" | sed 's/^/ /'
|
||||
else
|
||||
echo -e "${RED}❌ Redis not responding${NC}"
|
||||
fi
|
||||
else
|
||||
echo -e "${YELLOW}⚠️ redis-cli not installed${NC}"
|
||||
fi
|
||||
|
||||
# ============================================================================
|
||||
# MEILISEARCH STATUS
|
||||
# ============================================================================
|
||||
echo ""
|
||||
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
|
||||
echo -e "${MAGENTA}🔍 MEILISEARCH STATUS${NC}"
|
||||
echo -e "${MAGENTA}IF.TTT: if://agent/1/findings/meilisearch-status${NC}"
|
||||
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
|
||||
echo ""
|
||||
|
||||
MEILI_HEALTH=$(curl -s http://localhost:7700/health 2>/dev/null || echo "ERROR")
|
||||
if [[ "$MEILI_HEALTH" == *"available"* ]]; then
|
||||
echo -e "${GREEN}✅ Meilisearch responding (status: available)${NC}"
|
||||
echo ""
|
||||
|
||||
# Index statistics
|
||||
MEILI_KEY="5T66jrwQ8F8cOk4dUlFY0Vp59fMnCsIfi4O6JZl9wzU="
|
||||
echo -e "${BLUE}Index Statistics:${NC}"
|
||||
|
||||
INDEX_STATS=$(curl -s -H "Authorization: Bearer $MEILI_KEY" \
|
||||
"http://localhost:7700/indexes/navidocs-pages/stats" 2>/dev/null || echo "{}")
|
||||
|
||||
if echo "$INDEX_STATS" | grep -q "numberOfDocuments"; then
|
||||
DOC_COUNT=$(echo "$INDEX_STATS" | grep -o '"numberOfDocuments":[0-9]*' | cut -d: -f2 || echo "0")
|
||||
echo " Documents indexed: $DOC_COUNT"
|
||||
echo " Index: navidocs-pages"
|
||||
else
|
||||
echo -e "${YELLOW} ⚠️ Index 'navidocs-pages' not found${NC}"
|
||||
echo " IF.TTT: if://agent/5/findings/meilisearch-index-missing"
|
||||
fi
|
||||
else
|
||||
echo -e "${RED}❌ Meilisearch not responding${NC}"
|
||||
fi
|
||||
|
||||
# ============================================================================
|
||||
# DATABASE STATISTICS
|
||||
# ============================================================================
|
||||
echo ""
|
||||
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
|
||||
echo -e "${MAGENTA}🗄️ DATABASE STATISTICS${NC}"
|
||||
echo -e "${MAGENTA}IF.TTT: if://agent/3/findings/database-size${NC}"
|
||||
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
|
||||
echo ""
|
||||
|
||||
DB_PATH="/home/setup/navidocs/server/db/navidocs.db"
|
||||
|
||||
if [ -f "$DB_PATH" ]; then
|
||||
DB_SIZE=$(ls -lh "$DB_PATH" | awk '{print $5}')
|
||||
DB_MODIFIED=$(stat -c %y "$DB_PATH" | cut -d' ' -f1,2 | cut -d'.' -f1)
|
||||
|
||||
echo -e "${GREEN}✅ Database file exists${NC}"
|
||||
echo " Path: $DB_PATH"
|
||||
echo " Size: $DB_SIZE"
|
||||
echo " Modified: $DB_MODIFIED"
|
||||
echo ""
|
||||
|
||||
if command -v sqlite3 &> /dev/null; then
|
||||
echo -e "${BLUE}Record Counts:${NC}"
|
||||
sqlite3 "$DB_PATH" <<EOF 2>/dev/null | sed 's/^/ /' || echo " Error querying database"
|
||||
SELECT 'Documents: ' || COUNT(*) FROM documents;
|
||||
SELECT 'Document Pages:' || COUNT(*) FROM document_pages;
|
||||
SELECT 'Organizations: ' || COUNT(*) FROM organizations;
|
||||
SELECT 'Users: ' || COUNT(*) FROM users;
|
||||
SELECT 'OCR Jobs: ' || COUNT(*) FROM ocr_jobs;
|
||||
EOF
|
||||
fi
|
||||
else
|
||||
echo -e "${RED}❌ Database file not found${NC}"
|
||||
fi
|
||||
|
||||
# ============================================================================
|
||||
# SERVICE LOGS
|
||||
# ============================================================================
|
||||
|
||||
show_log "/tmp/navidocs-backend.log" "BACKEND API LOG" "${BLUE}" "$LINES"
|
||||
show_log "/tmp/navidocs-frontend.log" "FRONTEND (VITE) LOG" "${CYAN}" "$LINES"
|
||||
show_log "/tmp/navidocs-ocr-worker.log" "OCR WORKER LOG" "${GREEN}" "$LINES"
|
||||
|
||||
# ============================================================================
|
||||
# ERROR SUMMARY
|
||||
# ============================================================================
|
||||
echo ""
|
||||
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
|
||||
echo -e "${MAGENTA}🚨 ERROR SUMMARY (Last 20)${NC}"
|
||||
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
|
||||
echo ""
|
||||
|
||||
# Aggregate all errors from all logs
|
||||
{
|
||||
[ -f /tmp/navidocs-backend.log ] && grep -i "error\|exception\|fail" /tmp/navidocs-backend.log | tail -20 | sed 's/^/[BACKEND] /'
|
||||
[ -f /tmp/navidocs-frontend.log ] && grep -i "error\|exception\|fail" /tmp/navidocs-frontend.log | tail -20 | sed 's/^/[FRONTEND] /'
|
||||
[ -f /tmp/navidocs-ocr-worker.log ] && grep -i "error\|exception\|fail" /tmp/navidocs-ocr-worker.log | tail -20 | sed 's/^/[WORKER] /'
|
||||
} 2>/dev/null | tail -20 | while IFS= read -r line; do
|
||||
echo -e "${RED}$line${NC}"
|
||||
done || echo "No errors found in logs"
|
||||
|
||||
# ============================================================================
|
||||
# FOOTER
|
||||
# ============================================================================
|
||||
echo ""
|
||||
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
|
||||
echo -e "${CYAN}📝 Log aggregation complete${NC}"
|
||||
echo -e "${CYAN}Generated: $(date -u '+%Y-%m-%d %H:%M:%S UTC')${NC}"
|
||||
echo -e "${CYAN}IF.TTT: if://test-run/navidocs/debug-logs/$(date +%Y%m%d-%H%M%S)${NC}"
|
||||
echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
|
||||
echo ""
|
||||
echo "Useful commands:"
|
||||
echo " Follow backend: tail -f /tmp/navidocs-backend.log"
|
||||
echo " Follow frontend: tail -f /tmp/navidocs-frontend.log"
|
||||
echo " Follow worker: tail -f /tmp/navidocs-ocr-worker.log"
|
||||
echo " View all: tail -f /tmp/navidocs-*.log"
|
||||
echo ""
|
||||
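The per-line highlighting above reduces to a small severity classifier. A minimal standalone sketch of the same grep-based logic (the `classify_line` helper name is illustrative and not part of the script; the emoji pattern is omitted):

```shell
#!/usr/bin/env bash
# Classify a log line by severity, mirroring the grep patterns used in show_log.
classify_line() {
  local line=$1
  if echo "$line" | grep -iq "error\|fail\|exception\|critical"; then
    echo "ERROR"   # rendered red by the script
  elif echo "$line" | grep -iq "warn"; then
    echo "WARN"    # rendered yellow
  elif echo "$line" | grep -iq "success\|complete\|ready\|started"; then
    echo "OK"      # rendered green
  else
    echo "INFO"    # printed unstyled
  fi
}

classify_line "Request failed with 500"
classify_line "DeprecationWarning: punycode"
classify_line "Server ready on :8001"
```

Note the order matters: a line containing both "fail" and "warn" is classified by the first matching branch, which is why errors are tested before warnings.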
418	pre-launch-checklist.sh	Executable file
@@ -0,0 +1,418 @@
#!/bin/bash

# NaviDocs Pre-Launch Checklist
# IF.TTT Citation: if://doc/navidocs/pre-launch-checklist/v1.0
# Purpose: Bulletproof verification before starting the NaviDocs stack
# Created: 2025-11-13
# Based on: Agent reports 1-5 failure analysis

set -e

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Counters
PASS=0
FAIL=0
WARN=0

# Log file
LOG_FILE="/tmp/navidocs-pre-launch-$(date +%Y%m%d-%H%M%S).log"
exec > >(tee -a "$LOG_FILE") 2>&1

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo -e "${CYAN}🔍 NaviDocs Pre-Launch Checklist${NC}"
echo -e "${CYAN}IF.TTT Citation: if://doc/navidocs/pre-launch-checklist/v1.0${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Log file: $LOG_FILE"
echo "Started: $(date -u '+%Y-%m-%d %H:%M:%S UTC')"
echo ""

# Helper functions
# Counters use arithmetic assignment: ((PASS++)) returns status 1 when the old
# value is 0, which would abort the whole script under `set -e`.
check_pass() {
    echo -e "${GREEN}✅ PASS${NC}: $1"
    PASS=$((PASS + 1))
}

check_fail() {
    echo -e "${RED}❌ FAIL${NC}: $1"
    echo -e "${RED}   → $2${NC}"
    FAIL=$((FAIL + 1))
}

check_warn() {
    echo -e "${YELLOW}⚠️  WARN${NC}: $1"
    echo -e "${YELLOW}   → $2${NC}"
    WARN=$((WARN + 1))
}

section_header() {
    echo ""
    echo -e "${BLUE}━━━ $1 ━━━${NC}"
    echo -e "${CYAN}IF.TTT: if://test/navidocs/$(echo "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')${NC}"
}

# ============================================================================
# CHECK 1: Git Repository State
# ============================================================================
section_header "GIT REPOSITORY STATE"

cd /home/setup/navidocs || exit 1

# Check git commit
GIT_COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo "UNKNOWN")
GIT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "UNKNOWN")

if [ "$GIT_COMMIT" != "UNKNOWN" ]; then
    check_pass "Git repository detected"
    echo "  Commit: $GIT_COMMIT"
    echo "  Branch: $GIT_BRANCH"
    echo "  IF.TTT: if://git/navidocs/commit/$GIT_COMMIT"
else
    check_warn "Not a git repository" "Version tracking disabled"
fi

# Check for uncommitted changes
if git diff --quiet 2>/dev/null && git diff --cached --quiet 2>/dev/null; then
    check_pass "Working tree clean (no uncommitted changes)"
else
    check_warn "Uncommitted changes detected" "May not match production version"
    git status --short 2>/dev/null | head -10
fi

# ============================================================================
# CHECK 2: Required Ports Available
# ============================================================================
section_header "PORT AVAILABILITY"

check_port() {
    local port=$1
    local service=$2
    local allow_occupied=$3

    if lsof -Pi :${port} -sTCP:LISTEN -t >/dev/null 2>&1; then
        local pid=$(lsof -Pi :${port} -sTCP:LISTEN -t | head -1)
        local process=$(ps -p "$pid" -o comm= 2>/dev/null || echo "unknown")

        if [ "$allow_occupied" = "true" ]; then
            check_pass "Port $port ($service) already in use by $process (PID: $pid)"
        else
            check_warn "Port $port ($service) occupied by $process (PID: $pid)" "Will be killed on startup"
        fi
    else
        check_pass "Port $port ($service) available"
    fi
    return 0  # always succeed: a bare call returning non-zero would abort under `set -e`
}

# Critical ports (IF.TTT: if://agent/1/findings/ports)
check_port 8001 "Backend API" "false"
check_port 8080 "Frontend (Vite)" "false"
check_port 7700 "Meilisearch" "true"
check_port 6379 "Redis" "true"

# Check for port 8081 (Vite fallback - IF.TTT: if://agent/2/findings/port-fallback)
if lsof -Pi :8081 -sTCP:LISTEN -t >/dev/null 2>&1; then
    check_warn "Port 8081 occupied" "Vite may use port 8082+ as fallback"
fi

# ============================================================================
# CHECK 3: Node.js Version
# ============================================================================
section_header "NODE.JS VERSION"

NODE_VERSION=$(node --version 2>/dev/null || echo "NOT_INSTALLED")
REQUIRED_NODE="v20.19.5"

if [ "$NODE_VERSION" = "$REQUIRED_NODE" ]; then
    check_pass "Node.js version matches ($NODE_VERSION)"
elif [[ "$NODE_VERSION" == v20.* ]]; then
    check_warn "Node.js version mismatch" "Expected $REQUIRED_NODE, got $NODE_VERSION (minor version difference acceptable)"
else
    check_fail "Node.js version incompatible" "Expected $REQUIRED_NODE, got $NODE_VERSION"
fi

# ============================================================================
# CHECK 4: Database Exists and Accessible
# ============================================================================
section_header "DATABASE INTEGRITY"

DB_PATH="/home/setup/navidocs/server/db/navidocs.db"

if [ -f "$DB_PATH" ]; then
    DB_SIZE=$(ls -lh "$DB_PATH" | awk '{print $5}')
    DB_MODIFIED=$(stat -c %y "$DB_PATH" | cut -d' ' -f1)
    check_pass "Database file exists ($DB_SIZE)"
    echo "  Path: $DB_PATH"
    echo "  Size: $DB_SIZE"
    echo "  Modified: $DB_MODIFIED"
    echo "  IF.TTT: if://agent/3/findings/database-size"

    # Test SQLite accessibility
    if command -v sqlite3 &> /dev/null; then
        DOCUMENT_COUNT=$(sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM documents;" 2>/dev/null || echo "ERROR")
        if [ "$DOCUMENT_COUNT" != "ERROR" ]; then
            check_pass "Database readable ($DOCUMENT_COUNT documents)"
            echo "  IF.TTT: if://agent/3/findings/documents-count/$DOCUMENT_COUNT"
        else
            check_fail "Database not readable" "SQLite query failed"
        fi
    else
        check_warn "sqlite3 not installed" "Cannot verify database contents"
    fi
else
    check_fail "Database file missing" "Expected at $DB_PATH"
fi

# ============================================================================
# CHECK 5: Redis Connection
# ============================================================================
section_header "REDIS CONNECTIVITY"

if command -v redis-cli &> /dev/null; then
    REDIS_PING=$(redis-cli ping 2>/dev/null || echo "ERROR")
    if [ "$REDIS_PING" = "PONG" ]; then
        check_pass "Redis responding to ping"
        echo "  IF.TTT: if://agent/1/findings/redis-status"
    else
        check_fail "Redis not responding" "Start with: redis-server --daemonize yes"
    fi
else
    check_warn "redis-cli not installed" "Cannot verify Redis status"
fi

# ============================================================================
# CHECK 6: Meilisearch Status
# ============================================================================
section_header "MEILISEARCH CONNECTIVITY"

MEILI_HEALTH=$(curl -s http://localhost:7700/health 2>/dev/null || echo "ERROR")

if [[ "$MEILI_HEALTH" == *"available"* ]]; then
    check_pass "Meilisearch responding (status: available)"
    echo "  IF.TTT: if://agent/1/findings/meilisearch-status"

    # Check for index existence (IF.TTT: if://agent/5/findings/meilisearch-index-missing)
    MEILI_KEY="5T66jrwQ8F8cOk4dUlFY0Vp59fMnCsIfi4O6JZl9wzU="
    INDEX_CHECK=$(curl -s -H "Authorization: Bearer $MEILI_KEY" \
        "http://localhost:7700/indexes/navidocs-pages" 2>/dev/null | grep -o '"uid":"navidocs-pages"' || echo "")

    if [ -n "$INDEX_CHECK" ]; then
        check_pass "Meilisearch index 'navidocs-pages' exists"
    else
        check_warn "Meilisearch index 'navidocs-pages' missing" "Search functionality will fail until created"
        echo "  Create with: curl -X POST http://localhost:7700/indexes -H 'Authorization: Bearer $MEILI_KEY' -d '{\"uid\":\"navidocs-pages\"}'"
        echo "  IF.TTT: if://agent/5/findings/meilisearch-index-missing"
    fi
else
    check_fail "Meilisearch not responding" "Start with: docker start boat-manuals-meilisearch"
fi

# ============================================================================
# CHECK 7: Critical Dependencies
# ============================================================================
section_header "DEPENDENCIES CHECK"

cd /home/setup/navidocs/server || exit 1

# Check server node_modules
if [ -d "node_modules" ]; then
    MODULE_COUNT=$(ls -1 node_modules 2>/dev/null | wc -l)
    check_pass "Server dependencies installed ($MODULE_COUNT packages)"
else
    check_fail "Server node_modules missing" "Run: cd server && npm install"
fi

# Check client node_modules
cd /home/setup/navidocs/client || exit 1
if [ -d "node_modules" ]; then
    MODULE_COUNT=$(ls -1 node_modules 2>/dev/null | wc -l)
    check_pass "Client dependencies installed ($MODULE_COUNT packages)"
else
    check_fail "Client node_modules missing" "Run: cd client && npm install"
fi

# ============================================================================
# CHECK 8: Zombie Process Detection
# ============================================================================
section_header "ZOMBIE PROCESS CHECK"

cd /home/setup/navidocs || exit 1

# Check for existing NaviDocs processes
BACKEND_PROCS=$(pgrep -f "navidocs.*index.js" 2>/dev/null | wc -l)
FRONTEND_PROCS=$(pgrep -f "vite.*navidocs" 2>/dev/null | wc -l)
WORKER_PROCS=$(pgrep -f "ocr-worker.js" 2>/dev/null | wc -l)

if [ "$BACKEND_PROCS" -gt 0 ]; then
    check_warn "Backend process already running ($BACKEND_PROCS instances)" "Will be killed on startup"
    pgrep -af "navidocs.*index.js" | sed 's/^/  /'
fi

if [ "$FRONTEND_PROCS" -gt 0 ]; then
    check_warn "Frontend process already running ($FRONTEND_PROCS instances)" "Will be killed on startup"
    pgrep -af "vite.*navidocs" | sed 's/^/  /'
fi

if [ "$WORKER_PROCS" -gt 0 ]; then
    check_warn "OCR worker running ($WORKER_PROCS instances)" "Will be killed on startup"
fi

if [ "$BACKEND_PROCS" -eq 0 ] && [ "$FRONTEND_PROCS" -eq 0 ] && [ "$WORKER_PROCS" -eq 0 ]; then
    check_pass "No zombie NaviDocs processes detected"
fi

# ============================================================================
# CHECK 9: Log Files Accessible
# ============================================================================
section_header "LOG FILE ACCESSIBILITY"

# Ensure /tmp is writable
if [ -w /tmp ]; then
    check_pass "/tmp directory writable"
else
    check_fail "/tmp not writable" "Log files cannot be created"
fi

# Check for previous log files
BACKEND_LOG="/tmp/navidocs-backend.log"
FRONTEND_LOG="/tmp/navidocs-frontend.log"
WORKER_LOG="/tmp/navidocs-ocr-worker.log"

for log in "$BACKEND_LOG" "$FRONTEND_LOG" "$WORKER_LOG"; do
    if [ -f "$log" ]; then
        LOG_SIZE=$(ls -lh "$log" | awk '{print $5}')
        LOG_MODIFIED=$(stat -c %y "$log" | cut -d' ' -f1,2 | cut -d'.' -f1)
        check_pass "Previous log exists: $(basename "$log") ($LOG_SIZE, modified $LOG_MODIFIED)"
    fi
done

# ============================================================================
# CHECK 10: Environment Variables
# ============================================================================
section_header "ENVIRONMENT CONFIGURATION"

cd /home/setup/navidocs/server || exit 1

# Check for .env file
if [ -f ".env" ]; then
    check_pass ".env file exists"

    # Check for critical settings (IF.TTT: if://agent/1/findings/settings-encryption-key)
    if grep -q "SETTINGS_ENCRYPTION_KEY=" .env 2>/dev/null; then
        # cut -f2- keeps the whole value even if it contains '=' (e.g. base64 padding)
        KEY_VALUE=$(grep "SETTINGS_ENCRYPTION_KEY=" .env | cut -d'=' -f2- | tr -d ' "')
        if [ -n "$KEY_VALUE" ] && [ "$KEY_VALUE" != "your-32-byte-hex-key-here" ]; then
            check_pass "SETTINGS_ENCRYPTION_KEY configured"
        else
            check_warn "SETTINGS_ENCRYPTION_KEY not set" "Settings won't persist across restarts"
            echo "  Generate with: node -e \"console.log(require('crypto').randomBytes(32).toString('hex'))\""
            echo "  IF.TTT: if://agent/1/findings/settings-encryption-key"
        fi
    else
        check_warn "SETTINGS_ENCRYPTION_KEY missing from .env" "Settings won't persist"
    fi
else
    check_warn ".env file missing" "Using default configuration"
fi

# ============================================================================
# CHECK 11: Docker Status (for Meilisearch)
# ============================================================================
section_header "DOCKER STATUS"

if command -v docker &> /dev/null; then
    check_pass "Docker installed"

    # Check if Docker daemon is running
    if docker info &> /dev/null; then
        check_pass "Docker daemon running"

        # Check for Meilisearch container
        MEILI_CONTAINER=$(docker ps -a --filter "name=boat-manuals-meilisearch" --format "{{.Status}}" 2>/dev/null || echo "NOT_FOUND")
        # docker ps prints nothing (and exits 0) when no container matches
        MEILI_CONTAINER=${MEILI_CONTAINER:-NOT_FOUND}
        if [[ "$MEILI_CONTAINER" == *"Up"* ]]; then
            check_pass "Meilisearch container running"
        elif [ "$MEILI_CONTAINER" != "NOT_FOUND" ]; then
            check_warn "Meilisearch container exists but stopped" "Will be started automatically"
            echo "  Status: $MEILI_CONTAINER"
        else
            check_warn "Meilisearch container not found" "Will be created on first start"
        fi
    else
        check_fail "Docker daemon not running" "Start Docker or run: sudo systemctl start docker"
    fi
else
    check_fail "Docker not installed" "Required for Meilisearch"
fi

# ============================================================================
# CHECK 12: Uploads Directory
# ============================================================================
section_header "UPLOADS DIRECTORY"

UPLOADS_DIR="/home/setup/navidocs/uploads"

if [ -d "$UPLOADS_DIR" ]; then
    UPLOADS_SIZE=$(du -sh "$UPLOADS_DIR" 2>/dev/null | cut -f1)
    UPLOADS_COUNT=$(find "$UPLOADS_DIR" -type f 2>/dev/null | wc -l)
    check_pass "Uploads directory exists ($UPLOADS_SIZE, $UPLOADS_COUNT files)"
    echo "  Path: $UPLOADS_DIR"
    echo "  IF.TTT: if://agent/5/findings/uploads-directory"
else
    check_warn "Uploads directory missing" "Will be created automatically"
fi

# Ensure uploads directory is writable
if [ -w "$UPLOADS_DIR" ] || [ -w "/home/setup/navidocs" ]; then
    check_pass "Uploads directory writable"
else
    check_fail "Uploads directory not writable" "Document upload will fail"
fi

# ============================================================================
# SUMMARY
# ============================================================================
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo -e "${CYAN}📊 PRE-LAUNCH CHECKLIST SUMMARY${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo -e "${GREEN}✅ PASSED:   $PASS${NC}"
echo -e "${YELLOW}⚠️  WARNINGS: $WARN${NC}"
echo -e "${RED}❌ FAILED:   $FAIL${NC}"
echo ""
echo "Log file: $LOG_FILE"
echo "IF.TTT: if://test-run/navidocs/pre-launch/$(date +%Y%m%d-%H%M%S)"
echo ""

# Overall recommendation
if [ $FAIL -eq 0 ] && [ $WARN -eq 0 ]; then
    echo -e "${GREEN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${GREEN}✅ READY TO LAUNCH${NC}"
    echo -e "${GREEN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo ""
    echo "All checks passed! Safe to run: ./start-all.sh"
    exit 0
elif [ $FAIL -eq 0 ]; then
    echo -e "${YELLOW}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${YELLOW}⚠️  READY WITH WARNINGS${NC}"
    echo -e "${YELLOW}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo ""
    echo "System will work but some features may be degraded."
    echo "Review warnings above. Safe to run: ./start-all.sh"
    exit 0
else
    echo -e "${RED}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${RED}❌ NOT READY - FIX FAILURES BEFORE LAUNCH${NC}"
    echo -e "${RED}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo ""
    echo "Critical failures detected. DO NOT start services until resolved."
    echo "Review failures above and fix before running: ./start-all.sh"
    exit 1
fi
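The checklist's pass/warn/fail bookkeeping can be sketched in isolation. Note the `PASS=$((PASS + 1))` form: under `set -e`, `((PASS++))` aborts the script the first time it runs, because the post-increment expression evaluates to 0 and returns a non-zero status. A minimal sketch of the counter and summary logic (check names abbreviated):

```shell
#!/usr/bin/env bash
set -e

PASS=0; FAIL=0; WARN=0

# Arithmetic assignment returns status 0 even when the old value is 0,
# unlike ((PASS++)), which would abort the script under `set -e`.
check_pass() { PASS=$((PASS + 1)); }
check_warn() { WARN=$((WARN + 1)); }
check_fail() { FAIL=$((FAIL + 1)); }

check_pass; check_pass; check_warn

if [ "$FAIL" -eq 0 ]; then
  echo "READY: PASS=$PASS WARN=$WARN FAIL=$FAIL"
else
  echo "NOT READY: FAIL=$FAIL"
fi
```

The same exit-code convention lets the checklist gate startup, e.g. `./pre-launch-checklist.sh && ./start-all.sh`.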
28	server/scripts/reprocess-liliane.js	Normal file
@@ -0,0 +1,28 @@
/**
 * Manually queue Liliane1 document for OCR reprocessing
 */
import { addOcrJob } from '../services/queue.js';
import { v4 as uuidv4 } from 'uuid';

const documentId = 'efb25a15-7d84-4bc3-b070-6bd7dec8d59a';
const jobId = uuidv4();

console.log(`Queueing OCR job for Liliane1 Prestige Manual...`);
console.log(`Document ID: ${documentId}`);
console.log(`Job ID: ${jobId}`);

try {
  // Top-level await is valid here because this file is an ES module
  await addOcrJob(documentId, jobId, {
    filePath: `/home/setup/navidocs/uploads/${documentId}.pdf`,
    organizationId: 'test-org-123',
    userId: 'test-user-id',
    priority: 10 // High priority
  });

  console.log('✅ Job queued successfully!');
  console.log('Monitor progress with: tail -f /tmp/navidocs-ocr-worker.log');
  process.exit(0);
} catch (error) {
  console.error('❌ Failed to queue job:', error);
  process.exit(1);
}
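Once queued, the job moves through the Bull queue states that the debug script polls with `redis-cli llen`. A small sketch of the key layout (the `queue_key` helper is illustrative, not part of the codebase; the live `redis-cli` call is commented out because it needs a running Redis):

```shell
#!/usr/bin/env bash
# Bull stores each OCR queue state under a "bull:<queue>:<state>" key.
queue_key() { echo "bull:ocr-queue:$1"; }

for state in wait active completed failed; do
  key=$(queue_key "$state")
  echo "$key"
  # redis-cli llen "$key"   # depth of this state, with Redis running
done
```

For a one-off run like the reprocessing script above, watching the `active` and `failed` keys alongside the worker log is usually enough to confirm the job was picked up.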
389	verify-running.sh	Executable file
@@ -0,0 +1,389 @@
#!/bin/bash
|
||||
|
||||
# NaviDocs Runtime Verification
|
||||
# IF.TTT Citation: if://doc/navidocs/verify-running/v1.0
|
||||
# Purpose: Verify all services are actually running and responding
|
||||
# Created: 2025-11-13
|
||||
|
||||
set -e
|
||||
|
||||
# Colors
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
CYAN='\033[0;36m'
|
||||
NC='\033[0m'
|
||||
|
||||
# Configuration
|
||||
MAX_WAIT=30 # seconds
|
||||
BACKEND_URL="http://localhost:8001"
|
||||
FRONTEND_URL="http://localhost:8080"
|
||||
MEILI_URL="http://localhost:7700"
|
||||
|
||||
# Counters
|
||||
PASS=0
|
||||
FAIL=0
|
||||
TOTAL_TIME=0
|
||||
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo -e "${CYAN}🔍 NaviDocs Runtime Verification${NC}"
|
||||
echo -e "${CYAN}IF.TTT Citation: if://doc/navidocs/verify-running/v1.0${NC}"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo ""
|
||||
echo "Started: $(date -u '+%Y-%m-%d %H:%M:%S UTC')"
|
||||
echo "Max wait per check: ${MAX_WAIT}s"
|
||||
echo ""
|
||||
|
||||
# Helper functions
|
||||
check_pass() {
|
||||
echo -e "${GREEN}✅ PASS${NC}: $1"
|
||||
[ -n "$2" ] && echo " Time: ${2}ms"
|
||||
((PASS++))
|
||||
}
|
||||
|
||||
check_fail() {
|
||||
echo -e "${RED}❌ FAIL${NC}: $1"
|
||||
echo -e "${RED} → $2${NC}"
|
||||
((FAIL++))
|
||||
}
|
||||
|
||||
section_header() {
|
||||
echo ""
|
||||
echo -e "${BLUE}━━━ $1 ━━━${NC}"
|
||||
}
|
||||
|
||||
# Time an HTTP request
|
||||
time_request() {
|
||||
local url=$1
|
||||
local start=$(date +%s%3N)
|
||||
local response_code=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 "$url" 2>/dev/null || echo "000")
|
||||
local end=$(date +%s%3N)
|
||||
local duration=$((end - start))
|
||||
echo "$response_code:$duration"
|
||||
}
|
||||
|
||||
# Wait for service with retry
|
||||
wait_for_service() {
|
||||
local url=$1
|
||||
local service_name=$2
|
||||
local max_attempts=$3
|
||||
local attempt=1
|
||||
|
||||
while [ $attempt -le $max_attempts ]; do
|
||||
local result=$(time_request "$url")
|
||||
local code=$(echo $result | cut -d: -f1)
|
||||
local time=$(echo $result | cut -d: -f2)
|
||||
|
||||
if [ "$code" = "200" ] || [ "$code" = "304" ]; then
|
||||
check_pass "$service_name responding" "${time}"
|
||||
TOTAL_TIME=$((TOTAL_TIME + time))
|
||||
return 0
|
||||
fi
|
||||
|
||||
echo -e "${YELLOW} Attempt $attempt/$max_attempts: HTTP $code, waiting 2s...${NC}"
|
||||
sleep 2
|
||||
((attempt++))
|
||||
done
|
||||
|
||||
check_fail "$service_name not responding after ${max_attempts} attempts" "Last status: $code"
|
||||
return 1
|
||||
}
|
||||
|
||||
# ============================================================================
# CHECK 1: Process Verification
# ============================================================================
section_header "PROCESS VERIFICATION"

# Backend
BACKEND_PID=$(pgrep -f "navidocs.*index.js" 2>/dev/null || echo "")
if [ -n "$BACKEND_PID" ]; then
    check_pass "Backend process running (PID: $BACKEND_PID)"
    echo "  IF.TTT: if://agent/1/findings/backend-pid"
else
    check_fail "Backend process not found" "Expected process: node index.js"
fi

# Frontend
FRONTEND_PID=$(pgrep -f "vite.*navidocs" 2>/dev/null || pgrep -f "node.*vite" 2>/dev/null || echo "")
if [ -n "$FRONTEND_PID" ]; then
    check_pass "Frontend process running (PID: $FRONTEND_PID)"
    echo "  IF.TTT: if://agent/2/findings/frontend-pid"
else
    check_fail "Frontend process not found" "Expected process: vite dev server"
fi

# OCR Worker
WORKER_PID=$(pgrep -f "ocr-worker.js" 2>/dev/null || echo "")
if [ -n "$WORKER_PID" ]; then
    check_pass "OCR worker running (PID: $WORKER_PID)"
else
    check_fail "OCR worker not found" "Document processing will not work"
fi

# Redis
REDIS_PID=$(pgrep redis-server 2>/dev/null || echo "")
if [ -n "$REDIS_PID" ]; then
    check_pass "Redis running (PID: $REDIS_PID)"
else
    check_fail "Redis process not found" "Required for job queue"
fi

# Meilisearch (Docker)
MEILI_CONTAINER=$(docker ps --filter "name=boat-manuals-meilisearch" --format "{{.Status}}" 2>/dev/null || echo "")
if [[ "$MEILI_CONTAINER" == *"Up"* ]]; then
    check_pass "Meilisearch container running ($MEILI_CONTAINER)"
else
    check_fail "Meilisearch container not running" "Search will not work"
fi
# ============================================================================
# CHECK 2: HTTP Endpoints
# ============================================================================
section_header "HTTP ENDPOINT VERIFICATION"

# Backend health check
echo "Testing: $BACKEND_URL/health"
if wait_for_service "$BACKEND_URL/health" "Backend /health" 5; then
    # Get health response
    HEALTH_RESPONSE=$(curl -s "$BACKEND_URL/health" 2>/dev/null)
    if echo "$HEALTH_RESPONSE" | grep -q '"status":"ok"'; then
        check_pass "Backend health check returns valid JSON"
        UPTIME=$(echo "$HEALTH_RESPONSE" | grep -o '"uptime":[0-9.]*' | cut -d: -f2 || echo "unknown")
        echo "  Uptime: ${UPTIME}s"
        echo "  IF.TTT: if://agent/1/findings/backend-health"
    else
        check_fail "Backend health check invalid response" "Got: $HEALTH_RESPONSE"
    fi
fi

# Frontend (main page)
echo ""
echo "Testing: $FRONTEND_URL/"
if wait_for_service "$FRONTEND_URL/" "Frontend main page" 5; then
    # Check for Vue app mount point
    if curl -s "$FRONTEND_URL/" 2>/dev/null | grep -q '<div id="app">'; then
        check_pass "Frontend returns valid Vue app HTML"
        echo "  IF.TTT: if://agent/2/findings/frontend-html"
    else
        check_fail "Frontend HTML missing Vue app mount point" "Expected: <div id=\"app\">"
    fi
fi

# Meilisearch
echo ""
echo "Testing: $MEILI_URL/health"
if wait_for_service "$MEILI_URL/health" "Meilisearch /health" 3; then
    MEILI_RESPONSE=$(curl -s "$MEILI_URL/health" 2>/dev/null)
    if echo "$MEILI_RESPONSE" | grep -q '"status":"available"'; then
        check_pass "Meilisearch reports available status"
        echo "  IF.TTT: if://agent/1/findings/meilisearch-health"
    fi
fi
# ============================================================================
# CHECK 3: API Functionality
# ============================================================================
section_header "API FUNCTIONALITY TESTS"

# Test documents endpoint
echo "Testing: $BACKEND_URL/api/documents"
result=$(time_request "$BACKEND_URL/api/documents")
code=$(echo "$result" | cut -d: -f1)
time=$(echo "$result" | cut -d: -f2)

if [ "$code" = "200" ]; then
    check_pass "Documents API responding" "${time}"

    # Parse document count
    DOC_COUNT=$(curl -s "$BACKEND_URL/api/documents" 2>/dev/null | grep -o '"total":[0-9]*' | cut -d: -f2 || echo "unknown")
    echo "  Documents: $DOC_COUNT"
    echo "  IF.TTT: if://agent/1/findings/documents-api"
    TOTAL_TIME=$((TOTAL_TIME + time))
else
    check_fail "Documents API not responding" "HTTP $code"
fi

# Test search health
echo ""
echo "Testing: $BACKEND_URL/api/search/health"
result=$(time_request "$BACKEND_URL/api/search/health")
code=$(echo "$result" | cut -d: -f1)
time=$(echo "$result" | cut -d: -f2)

if [ "$code" = "200" ]; then
    check_pass "Search API responding" "${time}"
    echo "  IF.TTT: if://agent/1/findings/search-api"
    TOTAL_TIME=$((TOTAL_TIME + time))
else
    check_fail "Search API not responding" "HTTP $code"
fi
# ============================================================================
# CHECK 4: Redis Connectivity
# ============================================================================
section_header "REDIS CONNECTIVITY"

if command -v redis-cli &> /dev/null; then
    REDIS_PING=$(timeout 3 redis-cli ping 2>/dev/null || echo "ERROR")
    if [ "$REDIS_PING" = "PONG" ]; then
        check_pass "Redis responding to ping"

        # Check queue length
        QUEUE_LENGTH=$(redis-cli llen "bull:ocr-queue:wait" 2>/dev/null || echo "unknown")
        echo "  OCR queue length: $QUEUE_LENGTH jobs"
        echo "  IF.TTT: if://agent/1/findings/redis-ping"
    else
        check_fail "Redis not responding" "Cannot reach Redis server"
    fi
else
    check_fail "redis-cli not installed" "Cannot verify Redis connectivity"
fi
# ============================================================================
# CHECK 5: Database Access
# ============================================================================
section_header "DATABASE ACCESSIBILITY"

DB_PATH="/home/setup/navidocs/server/db/navidocs.db"

if [ -f "$DB_PATH" ]; then
    check_pass "Database file exists"

    if command -v sqlite3 &> /dev/null; then
        # Quick query to verify the database is not locked
        DOC_COUNT=$(timeout 3 sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM documents;" 2>/dev/null || echo "ERROR")
        if [ "$DOC_COUNT" != "ERROR" ]; then
            check_pass "Database readable ($DOC_COUNT documents)"
            echo "  IF.TTT: if://agent/3/findings/database-query"
        else
            check_fail "Database locked or corrupted" "Cannot query documents table"
        fi
    fi
else
    check_fail "Database file missing" "Expected: $DB_PATH"
fi
# ============================================================================
# CHECK 6: E2E Smoke Test
# ============================================================================
section_header "END-TO-END SMOKE TEST"

echo "Attempting quick document creation flow..."

# Step 1: Create test document via API
TEST_DOC_ID=""
if [ -f "/home/setup/navidocs/test-manual.pdf" ]; then
    echo "  1. Uploading test document..."
    UPLOAD_RESPONSE=$(curl -s -X POST "$BACKEND_URL/api/upload" \
        -F "file=@/home/setup/navidocs/test-manual.pdf" \
        -F "title=Verify-Running Test Doc" \
        -F "documentType=owner-manual" \
        -F "organizationId=test-org-id" 2>/dev/null || echo "ERROR")

    if echo "$UPLOAD_RESPONSE" | grep -q '"documentId"'; then
        TEST_DOC_ID=$(echo "$UPLOAD_RESPONSE" | grep -o '"documentId":"[^"]*"' | cut -d'"' -f4)
        check_pass "Document upload successful (ID: $TEST_DOC_ID)"
        echo "  IF.TTT: if://agent/5/findings/upload-success"

        # Step 2: Wait for OCR processing (max 10s)
        echo "  2. Waiting for OCR processing (max 10s)..."
        sleep 3

        for i in {1..7}; do
            DOC_STATUS=$(curl -s "$BACKEND_URL/api/documents/$TEST_DOC_ID" 2>/dev/null | grep -o '"status":"[^"]*"' | cut -d'"' -f4 || echo "unknown")
            if [ "$DOC_STATUS" = "indexed" ]; then
                check_pass "OCR processing completed (status: indexed)"
                echo "  IF.TTT: if://agent/5/findings/ocr-complete"
                break
            else
                echo "  Status: $DOC_STATUS, waiting..."
                sleep 1
            fi
        done

        # Step 3: Verify document is retrievable
        echo "  3. Verifying document retrieval..."
        result=$(time_request "$BACKEND_URL/api/documents/$TEST_DOC_ID")
        code=$(echo "$result" | cut -d: -f1)
        if [ "$code" = "200" ]; then
            check_pass "Document retrieval working"
        else
            check_fail "Document retrieval failed" "HTTP $code"
        fi
    else
        check_fail "Document upload failed" "Response: $UPLOAD_RESPONSE"
    fi
else
    echo "  Skipping: test-manual.pdf not found"
fi
# ============================================================================
# CHECK 7: Log File Activity
# ============================================================================
section_header "LOG FILE MONITORING"

check_log() {
    local log_file=$1
    local service_name=$2

    if [ -f "$log_file" ]; then
        local size=$(ls -lh "$log_file" | awk '{print $5}')
        local last_modified=$(stat -c %Y "$log_file")
        local now=$(date +%s)
        local age=$((now - last_modified))

        if [ $age -lt 60 ]; then
            check_pass "$service_name log active ($size, ${age}s old)"
        else
            check_fail "$service_name log stale" "Last modified ${age}s ago (may be frozen)"
        fi
    else
        check_fail "$service_name log missing" "Expected: $log_file"
    fi
}

check_log "/tmp/navidocs-backend.log" "Backend"
check_log "/tmp/navidocs-frontend.log" "Frontend"
check_log "/tmp/navidocs-ocr-worker.log" "OCR Worker"
# ============================================================================
# SUMMARY
# ============================================================================
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo -e "${CYAN}📊 RUNTIME VERIFICATION SUMMARY${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo -e "${GREEN}✅ PASSED: $PASS${NC}"
echo -e "${RED}❌ FAILED: $FAIL${NC}"
echo ""
echo "Total API response time: ${TOTAL_TIME}ms"
echo "IF.TTT: if://test-run/navidocs/verify-running/$(date +%Y%m%d-%H%M%S)"
echo ""

# Overall result
if [ $FAIL -eq 0 ]; then
    echo -e "${GREEN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${GREEN}✅ ALL SYSTEMS OPERATIONAL${NC}"
    echo -e "${GREEN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo ""
    echo "NaviDocs is ready for demo/presentation!"
    echo ""
    echo "Access URLs:"
    echo "  Frontend: $FRONTEND_URL"
    echo "  Backend:  $BACKEND_URL"
    echo "  Health:   $BACKEND_URL/health"
    exit 0
else
    echo -e "${RED}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${RED}❌ SYSTEM NOT READY${NC}"
    echo -e "${RED}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo ""
    echo "Critical failures detected. Review errors above."
    echo "Check logs:"
    echo "  tail -100 /tmp/navidocs-backend.log"
    echo "  tail -100 /tmp/navidocs-frontend.log"
    echo "  tail -100 /tmp/navidocs-ocr-worker.log"
    exit 1
fi
338  version-check.sh  Executable file
@@ -0,0 +1,338 @@
#!/bin/bash

# NaviDocs Version Verification
# IF.TTT Citation: if://doc/navidocs/version-check/v1.0
# Purpose: Verify exactly which version is running
# Created: 2025-11-13

set -e

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo -e "${CYAN}🔍 NaviDocs Version Verification${NC}"
echo -e "${CYAN}IF.TTT Citation: if://doc/navidocs/version-check/v1.0${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Timestamp: $(date -u '+%Y-%m-%d %H:%M:%S UTC')"
echo ""

cd /home/setup/navidocs || exit 1
# ============================================================================
# GIT VERSION
# ============================================================================
echo -e "${BLUE}━━━ GIT REPOSITORY VERSION ━━━${NC}"
echo ""

if [ -d ".git" ]; then
    GIT_COMMIT=$(git rev-parse HEAD 2>/dev/null)
    GIT_COMMIT_SHORT=$(git rev-parse --short HEAD 2>/dev/null)
    GIT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
    GIT_TAG=$(git describe --tags --exact-match 2>/dev/null || echo "No tag")
    GIT_AUTHOR=$(git log -1 --format='%an <%ae>' 2>/dev/null)
    GIT_DATE=$(git log -1 --format='%ai' 2>/dev/null)
    GIT_MESSAGE=$(git log -1 --format='%s' 2>/dev/null)

    echo -e "${GREEN}✅ Git repository detected${NC}"
    echo ""
    echo "  Commit:  $GIT_COMMIT_SHORT ($GIT_COMMIT)"
    echo "  Branch:  $GIT_BRANCH"
    echo "  Tag:     $GIT_TAG"
    echo "  Author:  $GIT_AUTHOR"
    echo "  Date:    $GIT_DATE"
    echo "  Message: $GIT_MESSAGE"
    echo ""
    echo "  IF.TTT: if://git/navidocs/commit/$GIT_COMMIT"

    # Check for uncommitted changes
    if git diff --quiet && git diff --cached --quiet; then
        echo -e "  ${GREEN}✅ Working tree clean${NC}"
    else
        echo -e "  ${YELLOW}⚠️  Uncommitted changes detected${NC}"
        echo ""
        echo "  Modified files:"
        git status --short | sed 's/^/    /'
    fi

    # Show recent commits
    echo ""
    echo -e "${BLUE}Recent commits:${NC}"
    git log --oneline -5 | sed 's/^/  /'
else
    echo -e "${YELLOW}⚠️  Not a git repository${NC}"
fi
# ============================================================================
# NODE.JS VERSION
# ============================================================================
echo ""
echo -e "${BLUE}━━━ NODE.JS ENVIRONMENT ━━━${NC}"
echo ""

NODE_VERSION=$(node --version 2>/dev/null || echo "NOT_INSTALLED")
NPM_VERSION=$(npm --version 2>/dev/null || echo "NOT_INSTALLED")
NODE_PATH=$(which node 2>/dev/null || echo "NOT_FOUND")

echo "  Node.js: $NODE_VERSION"
echo "  npm:     $NPM_VERSION"
echo "  Path:    $NODE_PATH"
echo ""

REQUIRED_NODE="v20.19.5"
if [ "$NODE_VERSION" = "$REQUIRED_NODE" ]; then
    echo -e "  ${GREEN}✅ Node.js version matches requirement ($REQUIRED_NODE)${NC}"
elif [[ "$NODE_VERSION" == v20.* ]]; then
    echo -e "  ${YELLOW}⚠️  Node.js minor version mismatch (expected $REQUIRED_NODE, got $NODE_VERSION)${NC}"
else
    echo -e "  ${RED}❌ Node.js version incompatible (expected $REQUIRED_NODE, got $NODE_VERSION)${NC}"
fi
# ============================================================================
# PACKAGE.JSON VERSIONS
# ============================================================================
echo ""
echo -e "${BLUE}━━━ PACKAGE.JSON VERSIONS ━━━${NC}"
echo ""

# Server package.json
if [ -f "server/package.json" ]; then
    SERVER_VERSION=$(grep '"version"' server/package.json | head -1 | cut -d'"' -f4)
    echo -e "${GREEN}Server:${NC} v$SERVER_VERSION"
    echo "  File: /home/setup/navidocs/server/package.json"

    # Key dependencies
    echo ""
    echo "  Key Dependencies:"
    grep -E '"express"|"sqlite3"|"bullmq"|"meilisearch"|"ioredis"' server/package.json | sed 's/^/    /'
else
    echo -e "${RED}❌ server/package.json not found${NC}"
fi

echo ""

# Client package.json
if [ -f "client/package.json" ]; then
    CLIENT_VERSION=$(grep '"version"' client/package.json | head -1 | cut -d'"' -f4)
    echo -e "${GREEN}Client:${NC} v$CLIENT_VERSION"
    echo "  File: /home/setup/navidocs/client/package.json"

    # Key dependencies
    echo ""
    echo "  Key Dependencies:"
    grep -E '"vue"|"vite"|"pinia"|"vue-router"' client/package.json | sed 's/^/    /'
else
    echo -e "${RED}❌ client/package.json not found${NC}"
fi
# ============================================================================
# DATABASE SCHEMA VERSION
# ============================================================================
echo ""
echo -e "${BLUE}━━━ DATABASE SCHEMA ━━━${NC}"
echo ""

DB_PATH="/home/setup/navidocs/server/db/navidocs.db"

if [ -f "$DB_PATH" ]; then
    DB_SIZE=$(ls -lh "$DB_PATH" | awk '{print $5}')
    DB_MODIFIED=$(stat -c %y "$DB_PATH" | cut -d' ' -f1,2 | cut -d'.' -f1)

    echo "  Path:     $DB_PATH"
    echo "  Size:     $DB_SIZE"
    echo "  Modified: $DB_MODIFIED"
    echo ""

    if command -v sqlite3 &> /dev/null; then
        # Table count
        TABLE_COUNT=$(sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM sqlite_master WHERE type='table';" 2>/dev/null || echo "ERROR")
        echo "  Tables:   $TABLE_COUNT"

        # Schema version (if set)
        SCHEMA_VERSION=$(sqlite3 "$DB_PATH" "SELECT value FROM system_settings WHERE key='schema_version';" 2>/dev/null || echo "Not set")
        echo "  Schema:   $SCHEMA_VERSION"

        # Table list
        echo ""
        echo "  Table List:"
        sqlite3 "$DB_PATH" "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;" 2>/dev/null | sed 's/^/    /' || echo "    Error reading tables"
    else
        echo -e "  ${YELLOW}⚠️  sqlite3 not installed, cannot inspect schema${NC}"
    fi
else
    echo -e "${RED}❌ Database file not found: $DB_PATH${NC}"
fi
# ============================================================================
# MEILISEARCH VERSION
# ============================================================================
echo ""
echo -e "${BLUE}━━━ MEILISEARCH VERSION ━━━${NC}"
echo ""

# Note: if `docker exec` fails, the pipeline's exit status is head's (0), so
# MEILI_VERSION ends up empty rather than "ERROR"; test for both cases.
MEILI_VERSION=$(docker exec boat-manuals-meilisearch meilisearch --version 2>/dev/null | head -1 || echo "ERROR")

if [ -n "$MEILI_VERSION" ] && [ "$MEILI_VERSION" != "ERROR" ]; then
    echo "  Version:   $MEILI_VERSION"
    echo "  Container: boat-manuals-meilisearch"
    echo ""

    # Meilisearch stats
    MEILI_HEALTH=$(curl -s http://localhost:7700/health 2>/dev/null || echo "ERROR")
    if [[ "$MEILI_HEALTH" == *"available"* ]]; then
        echo -e "  ${GREEN}✅ Meilisearch responding${NC}"

        # Get version via API
        MEILI_API_VERSION=$(curl -s http://localhost:7700/version 2>/dev/null | grep -o '"pkgVersion":"[^"]*"' | cut -d'"' -f4 || echo "unknown")
        echo "  API Version: $MEILI_API_VERSION"
        echo "  Expected:    v1.6.x"

        if [[ "$MEILI_API_VERSION" == 1.6.* ]]; then
            echo -e "  ${GREEN}✅ Version compatible${NC}"
        else
            echo -e "  ${YELLOW}⚠️  Version may be incompatible${NC}"
        fi
    else
        echo -e "  ${RED}❌ Meilisearch not responding${NC}"
    fi
else
    echo -e "${RED}❌ Meilisearch container not found or not running${NC}"
fi
# ============================================================================
# REDIS VERSION
# ============================================================================
echo ""
echo -e "${BLUE}━━━ REDIS VERSION ━━━${NC}"
echo ""

if command -v redis-cli &> /dev/null; then
    REDIS_VERSION=$(redis-cli --version 2>/dev/null | cut -d' ' -f2 || echo "unknown")
    echo "  CLI Version: $REDIS_VERSION"

    REDIS_PING=$(redis-cli ping 2>/dev/null || echo "ERROR")
    if [ "$REDIS_PING" = "PONG" ]; then
        echo -e "  ${GREEN}✅ Redis responding${NC}"

        # Get server version
        REDIS_SERVER_VERSION=$(redis-cli info server 2>/dev/null | grep "redis_version:" | cut -d: -f2 | tr -d '\r' || echo "unknown")
        echo "  Server Version: $REDIS_SERVER_VERSION"
    else
        echo -e "  ${RED}❌ Redis not responding${NC}"
    fi
else
    echo -e "${YELLOW}⚠️  redis-cli not installed${NC}"
fi
# ============================================================================
# RUNNING SERVICES VERSION CHECK
# ============================================================================
echo ""
echo -e "${BLUE}━━━ RUNNING SERVICES ━━━${NC}"
echo ""

# Backend process
BACKEND_PID=$(pgrep -f "navidocs.*index.js" 2>/dev/null || echo "")
if [ -n "$BACKEND_PID" ]; then
    BACKEND_START=$(ps -p $BACKEND_PID -o lstart= 2>/dev/null || echo "unknown")
    BACKEND_UPTIME=$(ps -p $BACKEND_PID -o etime= 2>/dev/null | xargs || echo "unknown")

    echo -e "${GREEN}✅ Backend API${NC}"
    echo "  PID:     $BACKEND_PID"
    echo "  Started: $BACKEND_START"
    echo "  Uptime:  $BACKEND_UPTIME"
    echo "  Command: $(ps -p $BACKEND_PID -o cmd= | head -c 80)"
    echo ""

    # Check backend health via API
    BACKEND_HEALTH=$(curl -s http://localhost:8001/health 2>/dev/null || echo "ERROR")
    if [[ "$BACKEND_HEALTH" == *"ok"* ]]; then
        API_UPTIME=$(echo "$BACKEND_HEALTH" | grep -o '"uptime":[0-9.]*' | cut -d: -f2 || echo "unknown")
        echo "  API Health: OK (uptime: ${API_UPTIME}s)"
    else
        echo -e "  ${RED}API Health: ERROR${NC}"
    fi
else
    echo -e "${RED}❌ Backend API not running${NC}"
fi

echo ""

# Frontend process
FRONTEND_PID=$(pgrep -f "vite.*navidocs" 2>/dev/null || pgrep -f "node.*vite" 2>/dev/null || echo "")
if [ -n "$FRONTEND_PID" ]; then
    FRONTEND_START=$(ps -p $FRONTEND_PID -o lstart= 2>/dev/null || echo "unknown")
    FRONTEND_UPTIME=$(ps -p $FRONTEND_PID -o etime= 2>/dev/null | xargs || echo "unknown")

    echo -e "${GREEN}✅ Frontend (Vite)${NC}"
    echo "  PID:     $FRONTEND_PID"
    echo "  Started: $FRONTEND_START"
    echo "  Uptime:  $FRONTEND_UPTIME"
    echo ""

    # Check which port Vite is listening on
    VITE_PORT=$(lsof -Pan -p $FRONTEND_PID -iTCP -sTCP:LISTEN 2>/dev/null | grep -o ':[0-9]*' | head -1 | tr -d ':' || echo "unknown")
    echo "  Listening: http://localhost:$VITE_PORT"
else
    echo -e "${RED}❌ Frontend (Vite) not running${NC}"
fi
# ============================================================================
# BUILD ARTIFACTS
# ============================================================================
echo ""
echo -e "${BLUE}━━━ BUILD ARTIFACTS ━━━${NC}"
echo ""

# Check for node_modules
SERVER_MODULES=$([ -d "server/node_modules" ] && echo "✅ Installed" || echo "❌ Missing")
CLIENT_MODULES=$([ -d "client/node_modules" ] && echo "✅ Installed" || echo "❌ Missing")

echo "  Server node_modules: $SERVER_MODULES"
if [ -d "server/node_modules" ]; then
    MODULE_COUNT=$(ls -1 server/node_modules 2>/dev/null | wc -l)
    MODULE_SIZE=$(du -sh server/node_modules 2>/dev/null | cut -f1)
    echo "    Packages: $MODULE_COUNT"
    echo "    Size:     $MODULE_SIZE"
fi

echo ""
echo "  Client node_modules: $CLIENT_MODULES"
if [ -d "client/node_modules" ]; then
    MODULE_COUNT=$(ls -1 client/node_modules 2>/dev/null | wc -l)
    MODULE_SIZE=$(du -sh client/node_modules 2>/dev/null | cut -f1)
    echo "    Packages: $MODULE_COUNT"
    echo "    Size:     $MODULE_SIZE"
fi
# ============================================================================
# SUMMARY
# ============================================================================
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo -e "${CYAN}📊 VERSION CHECK SUMMARY${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# Create version fingerprint; keep the parentheses balanced even when one of
# the package.json versions could not be read.
FINGERPRINT="NaviDocs"
[ -n "$GIT_COMMIT_SHORT" ] && FINGERPRINT="$FINGERPRINT@$GIT_COMMIT_SHORT"
if [ -n "$SERVER_VERSION" ] || [ -n "$CLIENT_VERSION" ]; then
    FINGERPRINT="$FINGERPRINT (server:${SERVER_VERSION:-unknown}, client:${CLIENT_VERSION:-unknown})"
fi

echo "Version Fingerprint: $FINGERPRINT"
echo "Node.js:     $NODE_VERSION"
echo "Database:    $DB_SIZE ($TABLE_COUNT tables)"
echo "Meilisearch: $MEILI_API_VERSION"
echo "Redis:       $REDIS_SERVER_VERSION"
echo ""
echo "IF.TTT: if://version/navidocs/fingerprint/$(echo "$FINGERPRINT" | md5sum | cut -d' ' -f1)"
echo ""
echo "Report generated: $(date -u '+%Y-%m-%d %H:%M:%S UTC')"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"