Observability for CRM–EHR Integrations: Monitoring, Auditing and Traceability Best Practices
A practical guide to observability, audit trails, tracing, SLA alerts, and incident response for Veeva–Epic integrations.
When teams connect life-sciences CRM and hospital EHR systems, the hardest problems are often not data mapping or API authentication. The real challenge is proving, in production, what happened to every record, every PHI access event, and every downstream update when something goes wrong. That is why observability has become a core requirement for Veeva–Epic integrations: it is the difference between a system that merely moves data and a system the business can trust under clinical, regulatory, and operational scrutiny. For an overview of the underlying integration landscape, see the technical background in our Veeva CRM and Epic EHR integration guide and the broader principles in MLOps for clinical decision support.
This guide focuses on the operational layer: how to instrument end-to-end tracing, build reliable audit trails for PHI access, set meaningful SLA alerts, and run a disciplined incident response process when data inconsistencies appear. If your architecture includes FHIR APIs, event buses, middleware, or data warehouses, the observability patterns here apply just as well. The goal is simple: make your integration measurable, explainable, and supportable at production scale.
Why observability is non-negotiable in CRM–EHR integrations
Integration failures are usually silent before they are visible
In a typical enterprise integration, a missing row or delayed message may be annoying. In healthcare and life sciences, those same failures can affect care coordination, field operations, compliance reporting, and patient support workflows. A failed sync between Epic and Veeva can leave a healthcare professional record incomplete, delay a follow-up task, or create a discrepancy that becomes expensive to investigate later. Many teams discover issues only when a user complains, which is already too late for an operationally sensitive pipeline.
Observability gives you earlier detection and stronger root-cause analysis. With structured logs, metrics, and traces, you can see whether a delay came from source-system latency, a transformation error, a downstream API timeout, or a queue backlog. That matters because the right fix for each cause is different, and guessing in regulated workflows creates risk. A well-designed monitoring strategy also reduces the burden on engineering teams by shortening incident triage and clarifying ownership.
Healthcare integrations demand stronger proof than ordinary SaaS workflows
Healthcare data carries special obligations: HIPAA, minimum necessary access, internal policy controls, and often contractual restrictions from providers and vendors. The integration must not only work; it must show who accessed what, when, why, and whether the action was authorized. This is why tracing alone is insufficient unless it is paired with immutable audit records, correlation IDs, and role-based access rules. Teams often borrow ideas from secure workflow systems and from practices used in signed acknowledgement pipelines where receipt, acknowledgment, and downstream delivery must all be provable.
There is also a reputational angle. If a physician, privacy officer, or compliance reviewer asks how a patient-related record moved from Epic to Veeva, the answer cannot be “the integration usually works.” The answer must be evidence-based and reproducible. That is the core promise of observability: operational confidence backed by machine-readable history.
Observability improves cost, not just reliability
Some teams treat observability as overhead, but in practice it lowers total cost of ownership. Better dashboards expose recurring bottlenecks, bad retry policies, or over-chatty sync jobs that inflate cloud spend. Instead of scaling brute force, teams can tune batch sizes, reduce noisy retries, and isolate the integrations that actually require real-time behavior. That same “measure before you optimize” mindset appears in benchmark-driven KPI setting, where teams use clear baselines rather than assumptions.
Pro tip: If you cannot answer “what percentage of Epic-to-Veeva transactions completed successfully within 5 minutes?” you do not yet have integration observability—you have logs.
Design the observability model before building the pipeline
Start with the business questions, not the tools
The best observability stack is not the one with the most dashboards; it is the one that answers the right questions quickly. For a CRM–EHR integration, those questions usually include: Did every intended record move? Was the transformation valid? Did the receiving system acknowledge the message? Was PHI accessed by an authorized workflow? Did the integration meet latency and freshness targets for downstream consumers? Define these questions up front and map them to metrics, traces, logs, and audit events.
That approach resembles how teams design trust systems in other operational environments. In data-rich merchandising systems, for example, the best display strategies begin with the desired shopper behavior. Likewise, integration observability should begin with the desired operational outcome: reliable, compliant movement of clinical and relationship data with minimal ambiguity.
Split signals into three layers
Use three layers of telemetry. The first is technical observability: service health, queue depth, API error rates, transformation latency, and retry counts. The second is operational observability: data freshness, record reconciliation, source-to-target completeness, and SLA compliance. The third is compliance observability: PHI access logs, consent-related actions, access denials, and immutable evidence for audit review. Each layer needs its own owners, dashboards, and alerting thresholds.
This separation prevents one common anti-pattern: assuming that a green uptime chart means the integration is healthy. A pipeline may be fully available while silently dropping fields, duplicating records, or sending stale attributes. For regulated healthcare workflows, that is a hidden failure, not a success.
Use correlation IDs and business keys everywhere
Every event should carry a correlation ID that survives translation through middleware, message queues, and destination APIs. When possible, attach stable business keys such as patient ID, encounter ID, HCP ID, account ID, and event type so analysts can reconstruct the transaction path later. If a single Epic event generates multiple Veeva actions, the relationship should still be traceable across all hops. This is especially important for tracing FHIR workflows where a single patient update can fan out into multiple resource changes.
Do not rely only on vendor-generated request IDs. Those are useful, but they often break at boundaries or disappear in asynchronous flows. Instead, create your own integration-wide identifiers and propagate them consistently through headers, payload metadata, and audit records.
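As a minimal sketch, the propagation pattern can be a single helper that mints an ID at the first hop and attaches it to both transport headers and payload metadata, so it survives hops that strip headers as well as hops that rewrite payloads. The header name, metadata key, and business-key fields below are illustrative assumptions, not Veeva or Epic conventions:

```python
import uuid

# Hypothetical header and metadata key names; adjust to your middleware's conventions.
CORRELATION_HEADER = "X-Integration-Correlation-Id"

def new_correlation_id() -> str:
    """Mint an integration-wide ID at the first hop (e.g., the Epic event listener)."""
    return f"epic-veeva-{uuid.uuid4()}"

def propagate(headers: dict, payload: dict) -> tuple[dict, dict]:
    """Attach the correlation ID to both transport headers and payload metadata."""
    cid = headers.get(CORRELATION_HEADER) or new_correlation_id()
    headers = {**headers, CORRELATION_HEADER: cid}
    meta = payload.setdefault("_integration_meta", {})
    meta["correlation_id"] = cid
    # Stable business keys let analysts reconstruct the path even if the ID is lost.
    meta.setdefault("business_keys", {
        k: payload[k] for k in ("patient_id", "encounter_id", "hcp_id") if k in payload
    })
    return headers, payload
```

Calling this helper at every hop, rather than only at the edge, is what keeps the ID alive through asynchronous boundaries.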
How to implement end-to-end tracing across Epic, middleware, and Veeva
Instrument the source, the broker, and the destination
True end-to-end tracing means instrumenting all three layers: the source system that emits the event, the middleware or broker that transforms and routes it, and the destination system that accepts it. In a Veeva–Epic flow, this might include an Epic FHIR subscription or interface event, an integration engine like MuleSoft or Mirth, and a Veeva API endpoint or batch import process. Each hop should log the same correlation ID, timestamp, status, and transformation summary. When possible, include a schema version so you can diagnose whether a break came from contract drift rather than data quality.
The same discipline used in automated feature extraction pipelines applies here: if you cannot trace the data through the pipeline, you cannot trust the result. The key difference is that healthcare adds stricter privacy and compliance controls, so tracing must be selective, redacted, and access-controlled.
Log meaningful checkpoints, not every byte
Over-logging creates noise and increases risk, especially when PHI may appear in payloads. Instead of storing raw payloads indiscriminately, log checkpoints that describe the payload and its outcome: source object type, record count, validation result, routing decision, destination response code, and reconciliation status. If a raw payload must be retained for debugging, store it securely with tight retention, encryption, and role-based access control. Redaction should be automatic, not a manual afterthought.
For example, a patient update should generate a trace like this: source event received, PII/PHI scrubbed, consent check passed, field map applied, destination record updated, acknowledgment received, reconciliation completed. That trace is enough for most operational and audit questions without exposing unnecessary content. It also supports faster root-cause analysis because the exact stage of failure is visible.
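A sketch of that checkpoint pattern, assuming structured JSON logs and the stage names from the example above (the logger name and context fields are placeholders):

```python
import json
import logging
import time

log = logging.getLogger("integration.trace")

STAGES = (
    "source_event_received",
    "phi_scrubbed",
    "consent_check_passed",
    "field_map_applied",
    "destination_record_updated",
    "ack_received",
    "reconciliation_completed",
)

def checkpoint(correlation_id: str, stage: str, status: str, **context) -> None:
    """Emit one structured checkpoint per pipeline stage.

    Context should describe the payload (object type, record count, response
    code), never contain it; redaction happens upstream of this call.
    """
    assert stage in STAGES, f"unknown stage: {stage}"
    log.info(json.dumps({
        "ts": time.time(),
        "correlation_id": correlation_id,
        "stage": stage,
        "status": status,   # e.g., "ok", "retry", "failed"
        **context,          # e.g., object_type="Patient", record_count=1
    }))
```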
Link traces to reconciliation outcomes
A trace is only half the story unless it connects to whether the business record ultimately matched expectations. Your observability model should pair each transaction trace with a reconciliation result: accepted, rejected, retried, partially applied, or manually corrected. That lets operators distinguish transport success from data correctness. In practice, this is the only way to detect “successful failure,” where the API returns 200 OK but the record is still incomplete or misclassified.
Use nightly or near-real-time reconciliation jobs to compare source counts, key fields, and delta sets. Where practical, reconcile at the business-event level instead of just row counts. A count match can still hide field-level divergence, and field-level divergence is often what matters most in healthcare operations.
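One way to model the trace-to-reconciliation pairing, assuming the reconciliation statuses listed above (types and field names are illustrative):

```python
from dataclasses import dataclass
from enum import Enum

class ReconOutcome(Enum):
    ACCEPTED = "accepted"
    REJECTED = "rejected"
    RETRIED = "retried"
    PARTIALLY_APPLIED = "partially_applied"
    MANUALLY_CORRECTED = "manually_corrected"

@dataclass
class TransactionResult:
    correlation_id: str
    transport_status: int        # HTTP status from the destination API
    recon_outcome: ReconOutcome  # did the record actually match expectations?

def is_successful_failure(result: TransactionResult) -> bool:
    """Detect the 'API said 200 OK but the record is still wrong' case."""
    return result.transport_status == 200 and result.recon_outcome is not ReconOutcome.ACCEPTED
```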
Audit trails for PHI access and compliance evidence
Audit the human and machine actors separately
Compliance teams need to know not only that data moved, but who or what initiated the movement. Human user actions, service-account activity, scheduled jobs, and admin overrides should each produce distinct audit records. This separation matters because service accounts are often over-permissioned and overlooked during reviews. For a Veeva–Epic integration, record the triggering user, system identity, purpose, data scope, and whether PHI was present or suppressed.
Think of audit design as a chain of custody. If a support rep triggers a patient-support workflow, the audit log should show the original user intent, the approved workflow, the API calls made, and the response from each system. That kind of evidence mirrors the rigor seen in vendor diligence playbooks, where proof of process is as important as the process itself.
Capture minimum necessary evidence with maximum context
An audit trail should be complete enough for compliance review, but not so verbose that it leaks sensitive data. At minimum, capture the event timestamp, identity, source system, destination system, object type, action performed, access outcome, policy decision, and trace ID. If the access is denied, log the reason, such as missing consent, insufficient role, or failed jurisdiction policy. If a data export occurs, include the destination, file or API method, and retention policy applied.
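As a sketch, that minimum evidence set can be captured in a single immutable record type; the field names here are assumptions to adapt to your audit store:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEvent:
    """One compliance-grade record per access or action; all fields required."""
    identity: str            # human user or service account
    source_system: str       # e.g., "epic"
    destination_system: str  # e.g., "veeva"
    object_type: str         # e.g., "Patient"
    action: str              # e.g., "read", "update", "export"
    outcome: str             # "allowed" or "denied"
    policy_decision: str     # e.g., "consent_ok", "missing_consent", "insufficient_role"
    trace_id: str            # links back to the transaction trace
    phi_present: bool
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
```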
Many organizations also maintain a separate immutable store for audit events. That store should be write-once or otherwise tamper-evident, with retention aligned to policy and legal requirements. Audit data should be queryable by privacy, security, operations, and compliance teams, but the access to audit logs themselves should be tightly controlled and monitored.
Build audit reports for common questions before the audit happens
When an auditor asks how many PHI-bearing records were accessed by a given service account last quarter, the answer should come from a report template, not a custom firefight. Build canned queries for common scenarios: patient lookup, consent-based transfer, role-based admin override, outbound CRM update, and reconciliation exception. Include drill-down to trace IDs and workflow names. The more your audit layer behaves like a product, the less painful each review will be.
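A hedged example of one such canned query, using SQLite as a stand-in for the audit store and a hypothetical audit_events schema:

```python
import sqlite3  # stand-in for your audit store's query interface

# Hypothetical schema: audit_events(identity, action, phi_present, timestamp, trace_id, ...)
PHI_ACCESS_BY_SERVICE_ACCOUNT = """
SELECT identity, action, COUNT(*) AS events,
       MIN(timestamp) AS first_seen, MAX(timestamp) AS last_seen
FROM audit_events
WHERE identity = ?
  AND phi_present = 1
  AND timestamp BETWEEN ? AND ?
GROUP BY identity, action
ORDER BY events DESC;
"""

def phi_access_report(conn: sqlite3.Connection, account: str, start: str, end: str):
    """Canned report: PHI-bearing accesses by one service account in a time window."""
    return conn.execute(PHI_ACCESS_BY_SERVICE_ACCOUNT, (account, start, end)).fetchall()
```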
If you want a useful comparison, consider how trustworthy profile design works: the strongest profiles present clear evidence, not vague promises. Your audit trail should do the same for regulated data movement.
SLA monitoring: defining the right metrics and alert thresholds
Monitor latency, freshness, completeness, and error budget
SLAs for CRM–EHR integration should reflect business reality, not generic infrastructure health. Four metrics matter most: end-to-end latency, data freshness, completeness, and error rate. Latency measures how long it takes a record to travel from source to destination. Freshness measures how current the downstream dataset is relative to source activity. Completeness measures whether all required records and fields arrived. Error rate measures the share of events that failed validation, routing, or delivery.
These metrics should be tracked per workflow, not only globally. A medication-related flow may need near-real-time delivery, while an HCP enrichment job can tolerate longer delay. Without workflow-specific targets, your alerts will either be too noisy or too weak. That is why teams often build SLOs and alerting tiers around business impact rather than engineering convenience.
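A minimal sketch of computing those metrics per workflow, assuming each delivery record carries a measured latency and delivery/failure flags (freshness would be derived separately from source timestamps):

```python
from statistics import quantiles

def sla_snapshot(deliveries: list[dict], sla_seconds: float) -> dict:
    """Per-workflow SLA metrics from delivery records shaped like
    {"latency_s": 42.0, "delivered": True, "failed": False}."""
    latencies = sorted(d["latency_s"] for d in deliveries if d["delivered"])
    total = len(deliveries)
    delivered = sum(d["delivered"] for d in deliveries)
    failed = sum(d["failed"] for d in deliveries)
    # quantiles() needs at least two points; fall back gracefully for tiny samples.
    p95 = quantiles(latencies, n=20)[-1] if len(latencies) >= 2 else (latencies[0] if latencies else None)
    return {
        "p95_latency_s": p95,
        "completeness": delivered / total if total else 1.0,
        "error_rate": failed / total if total else 0.0,
        "within_sla": sum(lat <= sla_seconds for lat in latencies) / total if total else 1.0,
    }
```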
Use multi-stage alerts, not one giant red alarm
Good SLA alerts distinguish between early warning and actual breach. For example, a warning might fire when queue depth exceeds an expected threshold for 10 minutes, while a critical alert might fire when the 95th percentile latency exceeds the SLA for 30 minutes or reconciliation completeness drops below 99.5%. This gives operators time to correct a growing issue before it becomes a client-visible failure. It also helps prevent alert fatigue, which is one of the fastest ways to make observability useless.
Where possible, alert on symptoms and causes together. A latency spike accompanied by a destination timeout has a different remediation path than a latency spike caused by a source-system backlog. Alert routing should include the owning team and the likely failure domain, such as source, transport, transformation, destination, or compliance policy.
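Those tiers can be expressed as data rather than code, which keeps thresholds reviewable. The sketch below mirrors the example numbers above; the queue-depth and latency values are placeholders to tune per workflow:

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    metric: str
    threshold: float
    sustained_minutes: int
    severity: str          # "warning" or "critical"
    failure_domain: str    # source | transport | transformation | destination | policy
    owner: str

RULES = [
    AlertRule("queue_depth", 5_000, 10, "warning", "transport", "integration-team"),
    AlertRule("p95_latency_s", 300, 30, "critical", "destination", "integration-team"),
    AlertRule("reconciliation_completeness", 0.995, 0, "critical", "transformation", "data-ops"),
]

def should_fire(rule: AlertRule, value: float, sustained_minutes: int) -> bool:
    """Fire only when the metric breaches its threshold for long enough."""
    breached = (value < rule.threshold if rule.metric == "reconciliation_completeness"
                else value > rule.threshold)
    return breached and sustained_minutes >= rule.sustained_minutes
```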
Track SLOs in business terms
Executives and operations leaders usually care more about business impact than technical thresholds. Report metrics such as percentage of patient updates delivered within target time, percentage of failed syncs auto-recovered, and number of PHI-related exceptions requiring manual review. Translate technical telemetry into business language on executive dashboards. That keeps stakeholders aligned and prevents “monitoring theater,” where charts look impressive but do not help decisions.
For teams building reporting surfaces, the lesson from sensor-to-dashboard systems is directly relevant: the dashboard should reflect the operational truth, not just the available data. The same principle applies to integration observability.
Data lineage and reconciliation: proving where every field came from
Capture lineage at the field level when it matters
In healthcare integrations, record-level lineage is sometimes too coarse. If one patient attribute is sourced from Epic demographics while another comes from Veeva case management, you need field-level lineage to explain the final state. That means capturing the origin system, transformation rule, mapping version, and timestamp for each critical field. This is particularly important when values are derived, normalized, masked, or merged across sources.
Data lineage also helps resolve disputes. If two teams disagree about whether the latest address is correct, lineage shows which system was authoritative at the time, which transformation applied, and whether a manual override occurred. It turns an argument into an evidence-based investigation.
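A sketch of a field-level lineage record, assuming ISO-8601 timestamps (which sort lexicographically) and illustrative rule and version names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldLineage:
    """Provenance for one critical field on the final merged record."""
    field_name: str           # e.g., "address_line_1"
    origin_system: str        # e.g., "epic_demographics" or "veeva_case_mgmt"
    transformation_rule: str  # e.g., "normalize_address_v3" (hypothetical)
    mapping_version: str      # e.g., "hcp-map-2024.06" (hypothetical)
    source_timestamp: str     # ISO-8601: when the value was observed at the source
    manual_override: bool = False

def authoritative_at(lineage: list[FieldLineage], field_name: str) -> FieldLineage:
    """Answer 'which system was authoritative for this field, and when?'"""
    candidates = [ln for ln in lineage if ln.field_name == field_name]
    return max(candidates, key=lambda ln: ln.source_timestamp)
```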
Use lineage graphs to spot unintended coupling
A lineage graph can reveal where your integration is more tightly coupled than intended. For example, a workflow originally meant to synchronize HCP records might also be feeding downstream segmentation logic, reporting jobs, and workflow triggers. If one upstream field changes, the blast radius may be larger than the engineering team expects. Visual lineage helps operations and architecture teams understand those dependencies before they become incidents.
This is similar to the way category prioritization systems expose hidden demand patterns. When you see the dependency map clearly, you can make better decisions about what to protect, monitor, and decouple.
Reconciliation should be continuous, not just a batch-end check
Daily reconciliation is better than nothing, but continuous reconciliation catches issues faster. Compare event counts, field hashes, and key business attributes between source and destination at intervals that match your operational criticality. If the integration supports near-real-time behavior, then reconciliation should also be near-real-time for critical workflows. When discrepancies arise, queue them into an exception workflow with severity, owner, and expected resolution time.
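A minimal sketch of that comparison, hashing the key business attributes of each record so field-level divergence surfaces even when counts match:

```python
import hashlib

def row_hash(record: dict, fields: tuple[str, ...]) -> str:
    """Stable hash over the key business attributes of one record."""
    canonical = "|".join(str(record.get(f, "")) for f in fields)
    return hashlib.sha256(canonical.encode()).hexdigest()

def reconcile(source: dict[str, dict], dest: dict[str, dict],
              fields: tuple[str, ...]) -> dict:
    """Compare source and destination keyed by business ID, at field granularity."""
    missing = [k for k in source if k not in dest]
    extra = [k for k in dest if k not in source]
    diverged = [
        k for k in source
        if k in dest and row_hash(source[k], fields) != row_hash(dest[k], fields)
    ]
    return {"missing": missing, "extra": extra, "diverged": diverged}
```

Each key in the returned lists can feed directly into the exception workflow described above, with the business ID as the handle for investigation.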
For teams that have previously relied on manual QA, the transition to continuous reconciliation can feel aggressive. But once you experience the reduction in hidden defects and the speed of root-cause analysis, the value becomes obvious. Hidden drift is one of the most expensive failure modes in enterprise integration.
Incident response for data inconsistencies and failed syncs
Pre-write the runbooks before production incidents happen
When a critical integration breaks, nobody wants to improvise from memory. Build incident runbooks that cover the most likely failure modes: API authentication failure, schema mismatch, queue backlog, partial update, duplicate ingestion, replay storm, and data drift. Each runbook should include detection signals, first checks, rollback options, escalation contacts, and customer communication guidance. The best runbooks are short enough to use under pressure but detailed enough to prevent guessing.
Borrow the discipline from supply-chain contingency planning: define what to do when the normal route fails, not just when things go well. In integrations, the same principle saves hours during incidents.
Classify incidents by data impact, not just by uptime
A short API outage may be less important than a subtle field-mapping error that corrupts patient-support records for six hours. Therefore, incident severity should include data integrity, compliance exposure, customer impact, and recoverability. A system that is down but preserves data can sometimes be recovered cleanly; a system that silently mutates records may require reprocessing and manual correction. This is why observability must extend beyond uptime into data correctness.
Create a severity matrix that distinguishes between no data loss, partial data loss, PHI exposure, downstream consumer impact, and regulatory reporting risk. Tie each severity level to response times, approvers, and rollback authority. That makes incident handling predictable and defensible.
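As a sketch, the matrix can live in version control as plain data; the levels, response times, and approvers below are placeholder assumptions, not recommendations:

```python
# Severity matrix keyed by data impact, not uptime. Adapt values to your policy.
SEVERITY_MATRIX = {
    "no_data_loss":              {"level": "SEV4", "response_min": 240, "rollback_authority": "on-call engineer"},
    "partial_data_loss":         {"level": "SEV3", "response_min": 60,  "rollback_authority": "integration lead"},
    "downstream_consumer_impact":{"level": "SEV2", "response_min": 30,  "rollback_authority": "integration lead"},
    "phi_exposure":              {"level": "SEV1", "response_min": 15,  "rollback_authority": "privacy officer + engineering director"},
    "regulatory_reporting_risk": {"level": "SEV1", "response_min": 15,  "rollback_authority": "compliance + engineering director"},
}

def classify(impact: str) -> dict:
    """Look up response expectations for a given data-impact category."""
    return SEVERITY_MATRIX[impact]
```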
Make post-incident review a structured learning loop
After an incident, the goal is not blame; it is system improvement. Require postmortems to include timeline, root cause, detection gap, alerting gap, remediation, and prevention actions. Add concrete tasks such as schema contract tests, stronger idempotency keys, better reconciliation thresholds, or additional audit fields. Track those actions until they are completed, not just until the document is filed away.
The operational maturity you want is the same kind of resilience seen in continuity planning playbooks and volatile-beat response frameworks: fast detection, clear ownership, and disciplined follow-through. In regulated integration work, that mindset is a competitive advantage.
Practical dashboard design for engineers, compliance, and business stakeholders
Build role-based views, not one universal dashboard
Engineering teams need technical signals: error rate, retries, CPU, queue depth, schema validation failures, and integration latency. Compliance teams need audit views: PHI access, denied access, privileged actions, and retention status. Business stakeholders need outcome views: delivery completeness, delayed records, and exception volume by workflow. If you try to make one dashboard serve all three audiences, it will likely serve none of them well.
The smartest pattern is a layered dashboard model. The top layer gives executives a traffic-light summary of workflow health. The middle layer supports operations with reconciliation and SLA data. The bottom layer supports engineers and analysts with traces, logs, and drill-downs to individual records. This structure reduces noise while preserving detail where it is needed.
Show trends, not only current state
A single green indicator can hide chronic degradation. Include trend lines for latency, error rate, exception volume, and backlog growth over 7, 30, and 90 days. That lets teams see whether a “healthy” integration is gradually becoming unstable. Trend analysis is also useful for capacity planning, because recurring spikes often reveal batch schedules or upstream release patterns.
Where appropriate, annotate dashboards with deployment events, partner release windows, maintenance periods, or policy changes. That context makes root-cause analysis much faster and is often the difference between a vague guess and a precise explanation. Integrations rarely fail in isolation; they fail in an environment with change around them.
Expose drill-downs from metrics to traces to records
Every high-level metric should link to evidence. If a dashboard shows a spike in failed updates, an operator should be able to click through to the affected trace IDs, then to the exact records, then to the transformation or policy decision that caused the issue. That chain reduces the time from alert to diagnosis and keeps teams from manually querying multiple systems. It also improves accountability because every abnormal metric can be tied back to a concrete event.
This idea mirrors the best practices in embedded reporting: visible summaries are helpful only if the underlying data is accessible. Observability is the same concept applied to integrations.
Security, privacy, and access control in observability systems
Logs and traces can become PHI leakage points
One of the most common observability mistakes is treating telemetry as harmless metadata. In healthcare, telemetry can contain names, IDs, notes, or contextual clues that become regulated data. Redaction should be applied at ingestion time, and raw payload storage should be strictly controlled and justified. Do not allow developers or support staff to casually search production logs that may contain PHI.
Observability platforms should support field-level masking, access segmentation, and retention policies. If your tools cannot enforce those controls, the architecture should not rely on them for PHI-bearing workflows. Security-by-design is not optional here; it is part of the integration’s trust model.
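A minimal ingestion-time redaction sketch; the field list and pattern are illustrative and should come from your data dictionary:

```python
import re

# Hypothetical PHI-bearing field names; extend from your data dictionary.
PHI_FIELDS = {"patient_name", "dob", "ssn", "mrn", "notes"}
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(event: dict) -> dict:
    """Mask PHI at ingestion so raw values never reach the telemetry store."""
    clean = {}
    for key, value in event.items():
        if key in PHI_FIELDS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, str):
            # Catch sensitive patterns that leak into otherwise safe fields.
            clean[key] = SSN_PATTERN.sub("[REDACTED-SSN]", value)
        else:
            clean[key] = value
    return clean
```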
Protect the audit layer from tampering
Audit data is only useful if it can be trusted. Store audit events in systems with immutability or tamper-evident controls, and monitor for unusual access patterns to the audit store itself. The people who can read audit logs should not also be able to rewrite them. This separation of duties is a basic control, but in fast-moving integration projects it is often under-implemented.
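One common tamper-evidence technique is a hash chain, where each audit record commits to its predecessor so a rewrite of any past record breaks every later hash. A minimal sketch:

```python
import hashlib
import json

def chain_hash(prev_hash: str, event: dict) -> str:
    """Each audit record commits to the one before it, making edits detectable."""
    body = json.dumps(event, sort_keys=True)
    return hashlib.sha256((prev_hash + body).encode()).hexdigest()

def verify_chain(events: list[dict]) -> bool:
    """Recompute the chain from the start; any mismatch means tampering or loss."""
    prev = "genesis"
    for e in events:
        payload = {k: v for k, v in e.items() if k != "chain_hash"}
        if e["chain_hash"] != chain_hash(prev, payload):
            return False
        prev = e["chain_hash"]
    return True
```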
For a useful analog, think about security basics for connected environments: the visible device is only as safe as the hidden account and access controls behind it. Audit trails deserve the same rigor.
Test observability controls as part of security testing
Do not limit testing to functionality. Include log-scrubbing validation, audit-record completeness checks, role-based access tests, and replay simulations. Verify that a user without the correct permission cannot view PHI in logs, and that a denied action still creates a valid audit record. Test retention and deletion behavior as well, because compliance depends on those lifecycle rules as much as on real-time capture.
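A hedged sketch of two such tests in pytest style; the import path and the client/audit_store fixtures are hypothetical placeholders for a wired-up test deployment, not a real API:

```python
# Hypothetical module path; `redact` is the ingestion-time scrubber sketched earlier.
from telemetry.redaction import redact

def test_redaction_scrubs_phi_fields():
    event = {"patient_name": "Jane Doe", "status": "ok", "notes": "SSN 123-45-6789"}
    clean = redact(event)
    assert clean["patient_name"] == "[REDACTED]"
    assert "123-45-6789" not in str(clean)

def test_denied_action_still_writes_audit_record(client, audit_store):
    """A blocked request must fail closed AND leave compliance evidence.

    `client` and `audit_store` are assumed pytest fixtures bound to a test
    environment of the integration.
    """
    response = client.get("/records/123", headers={"X-Role": "no-phi-access"})
    assert response.status_code == 403
    events = audit_store.query(trace_id=response.headers["X-Trace-Id"])
    assert events and events[-1].outcome == "denied"
```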
If your team already runs security and compliance reviews, add telemetry verification to the checklist. Observability is part of the attack surface and part of the control surface at the same time.
Tooling patterns, operating model, and maturity roadmap
Choose tools that fit the integration architecture
Different integration styles demand different tooling. Real-time APIs may need distributed tracing and low-latency alerting. Batch jobs may need reconciliation reports and freshness monitors. Event-driven pipelines may need message lag, dead-letter queue tracking, and replay controls. The right stack may include APM, centralized logging, SIEM integration, metrics storage, and an observability UI that can correlate all three signals.
What matters most is interoperability. If your observability tools cannot ingest source IDs, correlation IDs, and business identifiers from the middleware, you lose half the value. Evaluate tools on their ability to support healthcare-grade auditability, not just generic SaaS telemetry. That is especially important when integrating across platforms with different operational models and API conventions.
Define ownership across platform, integration, and compliance teams
Observability fails when nobody owns the signal. Platform teams usually own the collection and storage layer, integration teams own the workflow-specific metrics and traces, and compliance/security teams own audit requirements and access rules. A clear RACI model prevents gaps in which everyone assumes someone else is responsible for the alert, the dashboard, or the log-retention policy. Ownership should also extend to on-call responsibilities and post-incident corrective actions.
For mature organizations, the ideal state is a shared operating model: engineering can troubleshoot quickly, compliance can verify controls, and business stakeholders can understand service health without opening a ticket. That balance is what turns observability from a technical feature into a business capability.
Measure maturity in stages
Most teams move through four stages. Stage one is basic logging, where teams can see errors but cannot reliably correlate them. Stage two adds metrics and simple alerts. Stage three adds distributed tracing, reconciliation, and role-based audit views. Stage four adds continuous compliance evidence, automated exception handling, and incident playbooks that are tested regularly. The goal is not perfection on day one; it is progression with discipline.
As you mature, compare your current process to the rigor used in inclusive research systems and other high-trust operational environments. The common thread is consistent process, transparent evidence, and deliberate access control.
Implementation checklist for Veeva–Epic observability
Minimum viable controls
If you are starting from scratch, implement these controls first: correlation IDs across all hops, structured logs with redaction, basic latency and error metrics, reconciliation reports, PHI audit events, and a simple incident runbook. That set gives you enough data to detect, triage, and explain the most common failures. It also establishes the foundation for stronger controls later.
Do not wait for the perfect toolchain. A disciplined implementation with modest tooling is better than a sophisticated platform that nobody has configured properly. The most important thing is to capture evidence consistently from the beginning.
Nice-to-have controls that pay off quickly
Next, add distributed traces, SLA dashboards by workflow, exception queues, field-level lineage, and anomaly detection on reconciliation drift. These controls reduce manual investigation and catch subtle issues earlier. For high-volume integrations, they also help prevent cost blowouts by identifying noisy retries and unnecessary reprocessing.
As the environment scales, consider linking observability events to change management records. That gives you a clean bridge between deployment history and operational behavior. If a problem starts after a release, the evidence should be easy to find.
Long-term controls for regulated maturity
Over time, aim for tamper-evident audit storage, policy-based access to telemetry, workflow-specific SLOs, automated incident enrichment, and machine-readable lineage. Mature teams also create periodic control reviews to verify that the observability system itself is still aligned with legal and operational requirements. The best programs treat observability as a living control system, not a one-time project.
That is the end state for healthcare integration teams: a system that is not only connected, but explainable. In a domain where trust is everything, that explainability is a strategic asset.
Related Reading
- MLOps for Clinical Decision Support: Building Explainable, Auditable Pipelines - Explore governance patterns that transfer well to healthcare integration monitoring.
- Automating Signed Acknowledgements for Analytics Distribution Pipelines - Learn how to prove receipt, acknowledgment, and downstream delivery.
- Vendor Diligence Playbook: Evaluating eSign and Scanning Providers for Enterprise Risk - Useful for evaluating audit-friendly vendors and control surfaces.
- Automating Geospatial Feature Extraction with Generative AI - A strong example of tracing data through complex processing stages.
- Internet Security Basics for Homeowners: Protecting Cameras, Locks, and Connected Appliances - A simple but effective analogy for access control and telemetry protection.
FAQ: Observability for CRM–EHR Integrations
1) What is the difference between logging, tracing, and audit trails?
Logging captures discrete events and errors, tracing follows a transaction across systems, and audit trails provide compliance-grade evidence of access or action. In a Veeva–Epic integration, all three are necessary. Logs help engineers diagnose failures, traces help operators reconstruct flows, and audit trails help compliance teams prove who accessed PHI and why. If you only have one of these, the picture is incomplete.
2) How do I trace FHIR transactions without exposing PHI?
Use correlation IDs, resource types, timestamps, status codes, and redacted payload metadata instead of full raw content. Store sensitive fields only in secured, tightly access-controlled locations if absolutely necessary. The operational goal is to understand where the request went and whether it succeeded, not to duplicate clinical data across every telemetry system. Redaction should be enforced at ingestion so sensitive values do not spread.
3) What should an SLA for Epic-to-Veeva integration include?
An SLA should include end-to-end delivery time, freshness, completeness, error rate, and recovery expectations. It should be defined per workflow, because not every data stream has the same urgency. For example, a patient-support escalation may require near-real-time delivery, while a nightly enrichment feed can tolerate more delay. Good SLAs are measurable, testable, and tied to business impact.
4) What is the best way to investigate data inconsistencies?
Start with reconciliation, then drill into trace IDs, transformation logs, and source/destination acknowledgments. Check whether the issue is a source data change, mapping drift, duplicate replay, or destination validation failure. The fastest investigations use a prebuilt runbook and a consistent identifier strategy. Without those, teams end up manually comparing records across systems, which is slow and error-prone.
5) How often should we review audit trails and observability controls?
Review them continuously for security and operations, and formally at least quarterly for compliance and architecture alignment. Audit trails should be sampled during access reviews, incident investigations, and control attestations. Observability dashboards and alert thresholds should also be revisited after major releases or data model changes. If the integration evolves, the monitoring and audit design must evolve with it.
6) Do we really need distributed tracing if our integration is mostly batch-based?
Yes, if you need to explain where failures or delays occur across multiple systems. Batch pipelines may not need the same granularity as synchronous APIs, but they still benefit from correlation IDs, stage-by-stage timestamps, and reconciliation checkpoints. Even a coarse trace can be extremely valuable when a nightly job silently misses records. The more regulated the workflow, the more useful traceability becomes.