Incident Report Number: 2018-003

Network Services

Ticket Number: INC0116073

What happened?

Network services such as the Local Area Network (LAN), University Wireless Service (UWS), and Voice over Internet Protocol (VoIP) Cisco phones experienced connectivity issues.

Who was affected?

Some North Campus clients were potentially affected by this outage or degradation.

The following locations were known to have been impacted:

  • Chemistry
  • Mackenzie Hall
  • Computer Sciences
  • NINT
  • Earth Sciences Building
  • Nîpîsîy House
  • General Services Building
  • PAWS Centre
  • HUB Mall
  • South Academic Building
  • Law Centre
  • Students' Union Building
  • Lister Centre


What was the impact?

The affected clients potentially experienced network connectivity issues using the LAN, UWS, and VoIP Cisco phones. During the incident, these services were either degraded or unavailable, depending on location.

What was the timeline of the incident?

Start: 2018/10/09 08:50 – IST analysts initiated a backup of the current configurations of several network devices around North Campus, in preparation for device upgrades scheduled for later that evening. The backup was performed using a vendor management tool.
2018/10/09 09:00 – Client reports and IT infrastructure monitoring alerts began to come in, indicating service disruptions at different locations around campus. IST analysts initiated an investigation.
2018/10/09 09:55 – Investigation determined the vendor management tool had inadvertently corrupted the configurations of the affected network devices. Work began restoring configurations.
2018/10/09 10:52 – Service was restored to all but two locations: Nîpîsîy House and Mackenzie Hall.
2018/10/09 11:05 – IST analysts arrived at the last two sites; Mackenzie Hall was restored to service. Nîpîsîy House could not be accessed with the keys provided to the IST analysts.
End: 2018/10/09 11:56 – Working keys were obtained and service was restored at Nîpîsîy House.


What was the root cause of the incident?

The root cause was unexpected behavior of the vendor management tool used to back up the network devices' configurations in preparation for the device upgrades later that evening. For reasons that are not yet known, the tool inadvertently modified the configurations instead of simply reading them. The tool had been used previously without any issues.

What was the work around and resolution for the incident?

Work Around

Manually reverting to the previous configurations restored services to normal.
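The restore relied on having a known-good copy of each device's configuration on hand. As an illustration only (not the actual vendor tool or IST procedure, and the hostname and config lines below are hypothetical), the corruption-detection step can be sketched as a diff between the saved backup and the device's running configuration:

```python
import difflib

def config_diff(saved_config: str, running_config: str) -> list:
    """Return unified-diff lines between a saved backup and the running config.

    An empty result means the running configuration still matches the backup;
    any '+'/'-' lines indicate the device was modified, which is how the
    corrupted configurations could be identified and then reverted.
    """
    return list(difflib.unified_diff(
        saved_config.splitlines(),
        running_config.splitlines(),
        fromfile="backup",
        tofile="running",
        lineterm="",
    ))

# Hypothetical example: a VLAN definition missing from the running config.
backup = "hostname sw-chem-01\nvlan 110\ninterface Vlan110\n"
running = "hostname sw-chem-01\ninterface Vlan110\n"

delta = config_diff(backup, running)
for line in delta:
    print(line)
```

In practice the revert itself would be done through the device's own configuration-restore mechanism; the diff simply confirms which devices deviate from their backups and what changed.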

What are any recommendations to prevent this incident from occurring again?

A trouble ticket has been opened with the vendor for investigation and the tool will not be used for any future backups until the issue has been resolved.