If you're a GitLab team member and you're observing issues on GitLab.com or working with users who are reporting issues, please email gitlab-production-imoc@gitlab.pagerduty.com. This will immediately page the Incident Manager On Call.
If you're a GitLab team member looking for who is currently the Engineer On Call, please see the Who is the Current EOC? section.
If you're worried about a security problem: How to engage security
Incidents are anomalous conditions that result in—or may lead to—service degradation or outages. These events require human intervention to avert disruptions or restore service to operational status. Incidents are always given immediate attention.
The goal of incident management is to organize chaos into swift incident resolution. To that end, incident management provides:
There is only ever one owner of an incident, and only the owner of the incident can declare an incident resolved. At any time the incident owner can engage the next role in the hierarchy for support. With the exception of when GitLab.com is not functioning correctly, the incident issue should be assigned to the current owner.
It's important to clearly delineate responsibilities during an incident. Quick resolution requires focus and a clear hierarchy for delegation of tasks. Preventing overlaps and ensuring a proper order of operations is vital to mitigation. The responsibilities outlined in the roles below are cascading: ownership of the incident passes from one role to the next as those roles are engaged. Until the next role in the hierarchy engages, the previous role assumes all of the subsequent roles' responsibilities and retains ownership of the incident.
Role | Description | Who? |
---|---|---|
EOC - Engineer On Call | The EOC is usually the first person alerted - expectations for the role are in the Handbook for oncall. The checklist for the EOC is in our runbooks. If another party has declared an incident, once the EOC is engaged the EOC owns the incident. The EOC can escalate a page in PagerDuty to engage the IMOC and CMOC. | The Reliability Team Engineer On Call is generally an SRE and can declare an incident. They are part of the "SRE 8 Hour" on call shift in PagerDuty. |
IMOC - Incident Manager On Call | The IMOC is engaged when incident resolution requires coordination from multiple parties. The IMOC is the tactical leader of the incident response team—not a person performing technical work. The IMOC assembles the incident team by engaging individuals with the skills and information required to resolve the incident. | The Incident Manager is an Engineering Manager, Staff Engineer, or Director from the Reliability team. The IMOC rotation is currently in the "SRE Managers" PagerDuty schedule. |
CMOC - Communications Manager On Call | The CMOC disseminates information internally to stakeholders and externally to customers across multiple media (e.g. GitLab issues, Twitter, status.gitlab.com, etc.). | The Communications Manager is generally a member of the Support team at GitLab. Notifications to the Incident Management - CMOC service in PagerDuty will go to the rotations set up for CMOC. |
These definitions imply several on-call rotations for the different roles.
- #alerts and #alerts-general are an important source of information about the health of the environment and should be monitored during working hours.
- Incident issues are tracked in the production tracker. See production queue usage for more details.
- Incident calls happen in The Situation Room Permanent Zoom. The Zoom link is in the #incident-management topic.
- When engaged, join The Situation Room Permanent Zoom as soon as possible.
- Alerts are discussed in threads in #production. If the alert is flappy, create an issue and post a link in the thread. This issue might end up being part of an RCA or end up requiring a change in the alert rule.

At times, we have a security incident where we may need to take actions to block a certain URL path or part of the application. This list is meant to help the Security Engineer On-Call and EOC decide when to engage help and post to status.io.
If any of the following are true, it would be best to engage an Incident Manager:
In some cases, we may choose not to post to status.io. The following are examples of situations where we may skip a post/tweet; in some cases this helps protect the security of self-managed instances until we have released the security update.
To page the Incident Manager on call, you can use /pd trigger in the #production channel.
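If the Slack integration is unavailable, a page can also be sent directly through the PagerDuty Events API v2. Below is a minimal sketch, assuming a routing key for the IMOC service; the ROUTING_KEY value and the page_imoc helper are illustrative placeholders, not part of GitLab's tooling.

```python
import requests

ROUTING_KEY = "<imoc-service-routing-key>"  # placeholder, not a real key

def page_imoc(summary: str, source: str = "incident-management-handbook") -> str:
    """Trigger a PagerDuty alert against the IMOC service and return the dedup key."""
    response = requests.post(
        "https://events.pagerduty.com/v2/enqueue",
        json={
            "routing_key": ROUTING_KEY,
            "event_action": "trigger",
            "payload": {
                "summary": summary,
                "source": source,
                "severity": "critical",
            },
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["dedup_key"]
```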
For serious incidents that require coordinated communications across multiple channels, the IMOC will select a CMOC for the duration of the incident during the incident declaration process.
The GitLab support team staffs an oncall rotation via the Incident Management - CMOC service in PagerDuty. They have a section in the support handbook for getting new CMOC people up to speed.
During an incident, the CMOC will:
- Coordinate with the @advocates handle at the start of an incident.

Runbooks are available for engineers on call. The project README contains links to checklists for each of the above roles.
In the event of a GitLab.com outage, a mirror of the runbooks repository is available at https://ops.gitlab.net/gitlab-com/runbooks.
If you don't have a PagerDuty account and need to find out who the current oncall is, there are two ways you can do it:
- @sre-oncall - mention this user group in Slack and it will ping the current oncall.
- The #production Slack channel will tell you this with /chatops run oncall prod.
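If you do have PagerDuty API access, the current on-call can also be looked up programmatically. The following is a minimal sketch using the PagerDuty REST API; the API token is a placeholder and the schedule query ("SRE 8 Hour") is taken from the role table above, so adjust it to the rotation you need.

```python
import requests

API_TOKEN = "<read-only-pagerduty-token>"  # placeholder
HEADERS = {
    "Authorization": f"Token token={API_TOKEN}",
    "Accept": "application/vnd.pagerduty+json;version=2",
}

def current_oncall(schedule_query: str = "SRE 8 Hour") -> list[str]:
    """Return the names of users currently on call for schedules matching the query."""
    schedules = requests.get(
        "https://api.pagerduty.com/schedules",
        headers=HEADERS,
        params={"query": schedule_query},
        timeout=10,
    ).json()["schedules"]

    oncalls = requests.get(
        "https://api.pagerduty.com/oncalls",
        headers=HEADERS,
        params={"schedule_ids[]": [s["id"] for s in schedules]},
        timeout=10,
    ).json()["oncalls"]

    return sorted({entry["user"]["summary"] for entry in oncalls})
```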
If you are a GitLab team member and would like to report an incident/anomaly to the EOC and page them in, type /incident report in Slack (e.g. #production) and follow the prompts. Please ensure that the severity of the incident warrants paging in the EOC and that you, as the reporter, stay online until the EOC has had a chance to come online and get up to speed.
Type /incident declare in Slack (e.g. #production) and follow the prompts. The incident declaration is orchestrated through IMA (incident management automation) and has the following capabilities:
The capabilities noted with * are optional, and the engineer on call can decide which ones to choose depending on the severity of the incident.
Email gitlab-production-eoc@gitlab.pagerduty.com. This will immediately page the Engineer On Call.
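The email route can also be scripted if Slack and the PagerDuty UI are unavailable. This is a minimal sketch using Python's standard library; the SMTP relay (localhost:25) is an assumption about the sender's environment, while the recipient address comes from this page.

```python
import smtplib
from email.message import EmailMessage

def page_eoc(summary: str, sender: str) -> None:
    """Send a page to the Engineer On Call via the PagerDuty email integration."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = "gitlab-production-eoc@gitlab.pagerduty.com"
    msg["Subject"] = summary
    msg.set_content(summary)

    # Assumed local mail relay; replace with your organization's SMTP host.
    with smtplib.SMTP("localhost", 25) as smtp:
        smtp.send_message(msg)
```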
This is a first revision of the definitions of Service Disruption (Outage), Partial Service Disruption, and Degraded Performance per the terms on Status.io. Data is based on the graphs from the Key Service Metrics Dashboard.
Outage and Degraded Performance incidents occur when:

- Degraded is defined as any sustained 5 minute time period where a service is below its documented Apdex SLO or above its documented error ratio SLO.
- Outage (Status = Disruption) is defined as a 5 minute sustained error rate above the Outage line on the error ratio graph.

SLOs are documented in the runbooks/rules.
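As an illustration of the two definitions above, the sketch below classifies a sustained five-minute window of service samples. The Apdex and error ratio thresholds are placeholder values; the real SLOs live in the runbooks/rules referenced above.

```python
from dataclasses import dataclass

@dataclass
class ServiceSample:
    apdex: float        # observed Apdex for one evaluation interval
    error_ratio: float  # observed error ratio for the same interval

APDEX_SLO = 0.95           # placeholder Apdex SLO
ERROR_RATIO_SLO = 0.005    # placeholder error ratio SLO
OUTAGE_ERROR_RATIO = 0.05  # placeholder "Outage line" on the error ratio graph

def classify(window: list[ServiceSample]) -> str:
    """Classify a sustained 5 minute window of samples as ok / degraded / outage."""
    if all(s.error_ratio > OUTAGE_ERROR_RATIO for s in window):
        return "outage"  # Status = Disruption
    if all(s.apdex < APDEX_SLO or s.error_ratio > ERROR_RATIO_SLO for s in window):
        return "degraded"
    return "ok"
```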
To check if we are Degraded or Disrupted for GitLab.com, we look at these graphs:
A Partial Service Disruption is when only part of the GitLab.com services or infrastructure is experiencing an incident. Examples of partial service disruptions are instances where GitLab.com is operating normally except there are:
If an incident may be security related, engage the Security Operations on-call by using /security in Slack. More detail can be found in Engaging the Security On-Call.
Information is an asset to everyone impacted by an incident. Properly managing the flow of information is critical to minimizing surprise and setting expectations. We aim to keep interested stakeholders apprised of developments in a timely fashion so they can plan appropriately.
This flow is determined by:
Furthermore, avoiding information overload is necessary to keep every stakeholder’s focus.
To that end, we will have:
- The #incident-management room in Slack.
- The #incident-management channel for internal updates.

We manage incident communication using status.io, which updates status.gitlab.com. Incidents in status.io have state and status and are updated by the incident owner.
Definitions and rules for transitioning state and status are as follows.
State | Definition |
---|---|
Investigating | The incident has just been discovered and there is not yet a clear understanding of the impact or cause. If an incident remains in this state for longer than 30 minutes after the EOC has engaged, the incident should be escalated to the IMOC. |
Identified | The cause of the incident is believed to have been identified and a step to mitigate has been planned and agreed upon. |
Monitoring | The step has been executed and metrics are being watched to ensure that we're operating at a baseline. |
Resolved | The incident is closed and status is again Operational. |
Status can be set independent of state. The only time these must align is when an incident is Resolved, at which point the status must return to Operational.
Status | Definition |
---|---|
Operational | The default status before an incident is opened and after an incident has been resolved. All systems are operating normally. |
Degraded Performance | Users are impacted intermittently, but the impact is not observed in metrics, nor reported, to be widespread or systemic. |
Partial Service Disruption | Users are impacted at a rate that violates our SLO. The IMOC must be engaged and monitoring to resolution is required to last longer than 30 minutes. |
Service Disruption | This is an outage. The IMOC must be engaged. |
Security Issue | A security vulnerability has been declared public and the security team has asked to publish it. |
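As a compact illustration of the model above, the sketch below encodes the states and statuses as enums, together with the one alignment rule stated on this page (a Resolved incident must have an Operational status). It is not part of any GitLab tooling.

```python
from enum import Enum

class State(Enum):
    INVESTIGATING = "Investigating"
    IDENTIFIED = "Identified"
    MONITORING = "Monitoring"
    RESOLVED = "Resolved"

class Status(Enum):
    OPERATIONAL = "Operational"
    DEGRADED_PERFORMANCE = "Degraded Performance"
    PARTIAL_SERVICE_DISRUPTION = "Partial Service Disruption"
    SERVICE_DISRUPTION = "Service Disruption"
    SECURITY_ISSUE = "Security Issue"

def is_consistent(state: State, status: Status) -> bool:
    """Status can be set independently of state, except at resolution."""
    if state is State.RESOLVED:
        return status is Status.OPERATIONAL
    return True
```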
Incident severities encapsulate the impact of an incident and scope the resources allocated to handle it. Detailed definitions are provided for each severity, and these definitions are reevaluated as new circumstances become known. Incident management uses our standardized severity definitions, which can be found under our issue workflow documentation.
A near miss, "near hit", or "close call" is an unplanned event that has the potential to cause, but does not actually result in an incident.
In the United States, the Aviation Safety Reporting System has been collecting reports of close calls since 1976. Due to near miss observations and other technological improvements, the rate of fatal accidents has dropped about 65 percent (source).
Near misses are like a vaccine. They help the company better defend against more serious errors in the future, without harming anyone or anything in the process.
When a near miss occurs, we should treat it in a similar manner to a normal incident.
- Label the incident issue with the ~Near Miss label.