Codex now ingests structured governance data from five major US port authorities — Oakland, Long Beach, Los Angeles, JAXPORT, and the Port Authority of New York and New Jersey. Board meeting agendas, minutes, resolutions, and attachments are collected automatically and stored with full APRS envelope compliance. Each port uses a purpose-built adapter that understands its specific publishing format (Legistar, Granicus, or custom CMS), so records arrive normalized and ready for cross-port queries. See the data catalog for the full list of port authority tables.
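The adapter pattern described above can be sketched roughly as follows. This is an illustrative toy, not Codex's actual code: the interface, class names, and envelope fields are assumptions, though `MatterTitle` and `MatterTypeName` mirror field names used by the Legistar Web API.

```python
from typing import Protocol

class PortAdapter(Protocol):
    """Hypothetical adapter interface: fetch raw records from a source
    system and normalize them to a common shape for cross-port queries."""
    def fetch_raw(self) -> list[dict]: ...
    def normalize(self, raw: dict) -> dict: ...

class LegistarAdapter:
    """Toy adapter mapping a Legistar-style payload to a common envelope."""
    def __init__(self, port: str, items: list[dict]):
        self.port = port
        self.items = items

    def fetch_raw(self) -> list[dict]:
        return self.items

    def normalize(self, raw: dict) -> dict:
        # Map source-specific field names to normalized envelope fields.
        return {
            "port": self.port,
            "record_type": raw.get("MatterTypeName", "unknown"),
            "title": raw.get("MatterTitle", ""),
        }

adapter = LegistarAdapter(
    "oakland",
    [{"MatterTitle": "Tariff No. 2-A", "MatterTypeName": "Resolution"}],
)
records = [adapter.normalize(r) for r in adapter.fetch_raw()]
```

Because each adapter implements the same interface, a Granicus or custom-CMS adapter can be swapped in without changing downstream query code.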
Codex now tracks port tariff documents across revisions and automatically detects rate changes between versions. When a new tariff is published, the platform extracts individual line items and compares them against the previous version — flagging new, modified, and removed items with full supersession linkage. JAXPORT tariffs are supported at launch, with additional ports to follow. This gives you a structured, auditable view of port pricing changes over time without manually comparing PDF documents.
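The core comparison works roughly like the sketch below. Names and rates are illustrative, not the platform's actual API; line items are keyed by a hypothetical item code and compared across two revisions.

```python
def diff_tariff(prev: dict[str, float], curr: dict[str, float]) -> dict[str, list[str]]:
    """Classify line items as new, modified, or removed between two revisions."""
    new = [code for code in curr if code not in prev]
    removed = [code for code in prev if code not in curr]
    modified = [code for code in curr
                if code in prev and curr[code] != prev[code]]
    return {"new": new, "modified": modified, "removed": removed}

# Hypothetical line items from two consecutive tariff revisions.
prev = {"WHARFAGE-01": 4.25, "DOCKAGE-02": 110.00, "STORAGE-03": 0.85}
curr = {"WHARFAGE-01": 4.50, "DOCKAGE-02": 110.00, "CRANE-04": 575.00}

changes = diff_tariff(prev, curr)
# changes == {"new": ["CRANE-04"], "modified": ["WHARFAGE-01"],
#             "removed": ["STORAGE-03"]}
```

In the real pipeline, each flagged item would additionally carry a supersession link back to the line item it replaces.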
Port authority agenda items are now automatically classified using AI into categories like tariff changes, lease actions, capital projects, contract awards, and procurement — along with extracted counterparty names and terminal locations. This makes it easier to filter governance records by action type and quickly find the items that matter to your workflow.
Port authority records now use the claim/fact layer to separate what was proposed (staff recommendations, agenda items) from what was decided (board votes, adoption outcomes). This distinction makes it straightforward to query for approved actions versus pending proposals, and feeds downstream compliance and audit workflows.
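The proposed-versus-decided query the claim/fact layer enables can be illustrated with a minimal sketch. The field names (`layer`, `item_id`) are assumptions for illustration, not the platform's schema.

```python
from dataclasses import dataclass

@dataclass
class GovernanceRecord:
    item_id: str
    layer: str    # "claim" = proposed (agenda item), "fact" = decided (board vote)
    summary: str

records = [
    GovernanceRecord("A-101", "claim", "Staff recommends tariff amendment"),
    GovernanceRecord("A-101", "fact", "Board adopted tariff amendment 5-0"),
    GovernanceRecord("A-102", "claim", "Proposed terminal lease extension"),
]

# An item is pending if it has a claim but no corresponding fact.
decided = {r.item_id for r in records if r.layer == "fact"}
pending = [r for r in records if r.layer == "claim" and r.item_id not in decided]
# pending contains only A-102
```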
The live support chat, previously exclusive to Layer, is now available across Locus as well. When you’re signed in, the chat widget identifies you automatically for faster support. Anonymous mode is also supported for pre-login questions.
All Codex data loaders now use a safer batch upsert strategy that logs the first distinct error per batch and provides a summary of attempted, succeeded, and failed records. Previously, a single malformed record could silently stall a batch. This means data coverage stays current even when upstream sources occasionally deliver incomplete records.
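A simplified sketch of the strategy, assuming a per-record fallback within each batch (the function and store below are hypothetical, not the loaders' actual code):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("loader")

def safe_batch_upsert(records, upsert_one, batch_size=100):
    """Attempt records batch by batch; one bad record cannot stall the
    rest. Logs the first distinct error message per batch and returns
    an attempted/succeeded/failed summary."""
    summary = {"attempted": len(records), "succeeded": 0, "failed": 0}
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        seen_errors = set()
        for record in batch:
            try:
                upsert_one(record)
                summary["succeeded"] += 1
            except Exception as exc:  # loader must keep going
                summary["failed"] += 1
                msg = f"{type(exc).__name__}: {exc}"
                if msg not in seen_errors:  # first distinct error per batch
                    seen_errors.add(msg)
                    log.warning("batch %d: %s", start // batch_size, msg)
    return summary

# Toy in-memory store standing in for the real database upsert.
store: dict[str, dict] = {}

def upsert_one(rec: dict) -> None:
    if "id" not in rec:
        raise ValueError("missing primary key")
    store[rec["id"]] = rec

result = safe_batch_upsert(
    [{"id": "a"}, {"bad": True}, {"id": "b"}], upsert_one, batch_size=2
)
# result == {"attempted": 3, "succeeded": 2, "failed": 1}
```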
Removed 97,000 orphan NFIP flood claims that had no date information and were skewing cell-level risk scores. A new validation constraint prevents dateless records from being ingested in the future, so flood exposure scores are now based entirely on properly dated claims.
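The new constraint amounts to rejecting records with no usable date before ingestion. A minimal sketch, with a hypothetical `date_of_loss` field standing in for the actual claim schema:

```python
def has_valid_date(claim: dict) -> bool:
    """Reject claims with a missing or empty loss date (illustrative
    check mirroring the new ingestion constraint)."""
    return bool(claim.get("date_of_loss"))

claims = [
    {"id": 1, "date_of_loss": "2012-10-29"},
    {"id": 2, "date_of_loss": None},  # orphan: would previously skew scores
]
valid = [c for c in claims if has_valid_date(c)]
# only claim 1 survives
```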
Resolved several API authentication and input validation issues across the platform. Rate limiting is now more resilient to transient errors, filter parameters are sanitized against injection, and internal diagnostic endpoints require proper authentication. These changes strengthen the platform’s security posture — no action is required on your part.