Overview
On 31st October 2025, our services experienced a widespread connectivity outage due to an unannounced configuration change made by our upstream network provider. This change temporarily blocked all inbound and outbound network traffic, resulting in service interruptions across multiple systems.
We sincerely apologize for the impact this had on our clients and want to share details about what occurred, how it was resolved, and the steps being taken to prevent similar issues in the future.
What Happened
Our upstream provider implemented a firewall configuration change without prior notification, switching the default network policy from allow-all to deny-all. As a result, any traffic not explicitly permitted was blocked.
Because this adjustment was made at the provider level, it immediately affected all of our connected systems and services. The change prevented normal communication between our application servers, APIs, and databases, leading to widespread service disruptions.
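To illustrate the failure mode, the short Python sketch below shows how a deny-by-default policy typically appears from an application's point of view: connection attempts hang and time out rather than being actively refused. The hostnames and ports are hypothetical placeholders, not our actual infrastructure.

import socket

# Illustrative only: endpoints and ports are hypothetical placeholders,
# not our actual infrastructure.
ENDPOINTS = [
    ("api.example.internal", 443),
    ("db.example.internal", 5432),
]

def probe(host, port, timeout=5):
    """Attempt a TCP connection and classify the failure mode."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "reachable"
    except socket.timeout:
        # A silent drop (deny-all policy) usually shows up as a timeout.
        return "timed out (traffic silently dropped?)"
    except ConnectionRefusedError:
        # An active reject or a stopped service responds immediately.
        return "connection refused"
    except OSError as exc:
        return f"error: {exc}"

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        print(f"{host}:{port} -> {probe(host, port)}")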
Impact
Temporary loss of access to several client-facing applications and APIs
Disruption of internal and external system communications
Delayed processing of transactions and service requests during the outage window
All affected systems were fully restored once the issue was identified and resolved.
Resolution
Upon detecting the outage, our network engineering team engaged with the upstream provider to investigate and isolate the issue. After confirming the cause—a global deny-all firewall policy introduced at the provider level—our teams worked together to implement the necessary allow rules and restore connectivity.
Due to the number of systems and services impacted, recovery was performed in phases to ensure accuracy and stability. Full service functionality was restored by 7:36 pm UTC on 1st November 2025.
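As an illustration of this phased approach, the sketch below shows the general pattern of verifying one phase's health endpoints before moving on to the next. The phase groupings, URLs, and retry interval are hypothetical placeholders, not our actual service inventory or tooling.

import time
import urllib.request
import urllib.error

# Illustrative phased-recovery check. Phase groupings and health-check URLs
# are hypothetical placeholders, not our actual service inventory.
PHASES = [
    ("core networking", ["https://gateway.example.internal/healthz"]),
    ("databases and APIs", ["https://api.example.internal/healthz"]),
    ("client-facing applications", ["https://app.example.internal/healthz"]),
]

def healthy(url, timeout=5):
    """Return True if the health endpoint responds with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def verify_phases(retry_delay=30):
    """Confirm each phase is healthy before moving on to the next."""
    for name, urls in PHASES:
        while not all(healthy(u) for u in urls):
            print(f"Phase '{name}' not yet healthy; retrying in {retry_delay}s")
            time.sleep(retry_delay)
        print(f"Phase '{name}' verified")

if __name__ == "__main__":
    verify_phases()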
Preventative Measures
We are working closely with our upstream provider to ensure greater transparency and coordination for future network-level changes. In addition, we are implementing internal safeguards to improve resilience and minimize potential downstream impact from external providers.
Our ongoing and planned improvements include:
Establishing enhanced communication and change notification protocols with our upstream provider
Creating redundant network pathways to mitigate dependency on a single configuration layer
Expanding real-time monitoring and alerting to detect provider-level changes more rapidly (see the monitoring sketch after this list)
Reviewing and strengthening incident response procedures to reduce restoration time
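As a rough illustration of the monitoring item above, the sketch below probes a set of endpoints on a fixed interval and raises an alert after several consecutive failed sweeps. The targets, thresholds, and alert hook are hypothetical placeholders rather than a description of our production tooling.

import socket
import time

# Illustrative monitoring loop. Targets, thresholds, and the alert hook are
# hypothetical placeholders; a production setup would feed an alerting system.
TARGETS = [("api.example.internal", 443), ("gateway.example.internal", 443)]
FAILURE_THRESHOLD = 3   # consecutive failed sweeps before alerting
CHECK_INTERVAL = 60     # seconds between sweeps

def reachable(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def send_alert(message):
    # Placeholder: in practice this would page the on-call engineer
    # or post to an incident channel.
    print(f"ALERT: {message}")

def monitor():
    consecutive_failures = 0
    while True:
        down = [f"{h}:{p}" for h, p in TARGETS if not reachable(h, p)]
        if down:
            consecutive_failures += 1
            if consecutive_failures >= FAILURE_THRESHOLD:
                send_alert(f"Sustained connectivity loss to: {', '.join(down)}")
        else:
            consecutive_failures = 0
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    monitor()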
Moving Forward
We fully understand how critical service availability is to your operations. Although this incident originated outside our direct infrastructure, we take full responsibility for maintaining a resilient service environment and ensuring timely communication in the event of external disruptions.
We appreciate your patience and understanding during this event and remain committed to transparency, reliability, and continuous improvement.
If you have any questions or would like additional details about this incident, please contact our support team at support@newhorizoncloud.com.