For more details about the vision for this area of the product, see the Plan stage page.
This team is currently shared between Plan:Portfolio Management and Plan:Certify.
| Person | Role |
|--------|------|
| John Hope | Backend Engineering Manager, Plan:Portfolio Management & Plan:Certify |
| Felipe Artur | Backend Engineer, Plan:Portfolio Management & Plan:Certify |
| Jarka Košanová | Senior Backend Engineer, Plan:Portfolio Management & Plan:Certify |
| Jan Provaznik | Senior Backend Engineer, Plan:Portfolio Management & Plan:Certify |
| Charlie Ablett | Senior Backend Engineer, Plan:Portfolio Management & Plan:Certify |
| Eugenia Grieff | Backend Engineer, Plan:Portfolio Management & Plan:Certify |
| Donald Cook | Frontend Engineering Manager, Plan |
| Kushal Pandya | Senior Frontend Engineer, Plan:Portfolio Management & Plan:Certify |
| Eulyeon K. | Frontend Engineer (Intern), Plan |
| Justin Farris | Group Manager, Product Management, Plan |
| Rajat Jain | Frontend Engineer, Plan:Portfolio Management & Plan:Certify |
| Florie Guibert | Frontend Engineer, Plan:Portfolio Management & Plan:Certify |
| Axel García | Frontend Engineer, Plan:Portfolio Management & Plan:Certify |
| Désirée Chevalier | Software Engineer in Test, Plan:Project Management (primary) & Plan:Portfolio Management (secondary) |
| Alexis Ginsberg | Senior Product Designer, Plan:Portfolio Management |
| Keanon O'Keefe | Senior Product Manager, Plan:Portfolio Management |
| Marcin Sędłak-Jakubowski | Technical Writer, Plan |
This chart shows the progress we're making on hiring. Check out our jobs page for current openings.
Since we share a backend team between the Plan:Portfolio Management and Certify groups, we have a combined metrics dashboard. This is intended to track against some of the Development Department KPIs, particularly those around merge request creation and acceptance. From that dashboard, the following charts show MR Rate and Mean time to merge (MTTM) respectively.
The following chart shows a breakdown of MRs by category (omitting Security, for now). Totals may vary slightly from overall throughput as some MRs may have more than one throughput label.
We have an application performance dashboard (internal) that tracks the performance of the parts of GitLab for which we are responsible. This dashboard is shared between the Portfolio Management and Certify groups for now.
We use a lightweight system of issue weighting to help with capacity planning, acknowledging that things often take longer than expected. The main focus is on making sure the overall sum of the weights in a milestone is reasonable, rather than on the accuracy of any individual weight.
It's OK if an issue takes longer than the weight indicates. The weights are intended to be used in aggregate, and what takes one person a day might take another person a week, depending on their level of background knowledge about the issue. That's explicitly OK and expected.
The weights we use are:
| Weight | Description |
|--------|-------------|
| 1 | Trivial: does not need any testing |
| 2 | Small: needs some testing, but nothing involved |
| 3 | Medium: will take some time and collaboration |
| 4 | Substantial: will take significant time and collaboration to finish |
| 5 | Large: will take a major portion of the milestone to finish |
Anything larger than 5 should be broken down if possible.
We're discussing a possible change to the weight scale we use.
We look at recent releases and upcoming availability to determine the weight available for a release.
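The capacity check described above can be sketched as a simple sum-versus-budget comparison. This is a hypothetical illustration, not tooling we use: the issue titles, weights, and the available-weight figure are made-up stand-ins for values that would come from the milestone plan.

```python
# Hypothetical capacity check: compare the summed weights of candidate
# issues against the weight available for the upcoming release.
planned_issues = {
    "Epic swimlanes on boards": 3,   # medium: some time and collaboration
    "Board filter bug fix": 1,       # trivial: no dedicated testing needed
    "Roadmap date rollups": 5,       # large: major portion of the milestone
}

# Derived from recent releases and upcoming availability (made-up number).
available_weight = 8

total = sum(planned_issues.values())
print(f"Planned weight: {total}, available weight: {available_weight}")
if total > available_weight:
    print("Over capacity: move issues to a later milestone or cut scope.")
```

Because the weights only need to be reasonable in aggregate, a check like this is a planning guardrail, not a commitment that each issue will match its weight.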
Estimating bugs is inherently difficult. The majority of the effort in fixing a bug is finding its cause, and only then can the fix be accurately estimated. Additionally, velocity is used to measure the amount of new product output, and bug fixes are typically corrections to a feature that was already tracked and weighted.
Because of this, we do not weigh bugs during ~"workflow::planning breakdown". If an engineer picks up a bug and determines that there will be a significant level of effort in fixing it (for example, a large migration is needed, or we need to switch state management to Vuex on the frontend), we then will want to prioritize it against feature deliverables. Ping the product manager with this information so they can determine when the work should be scheduled.
To assign weights to issues in a future milestone, we ask team members to continually weigh and break down issues in ~"workflow::planning breakdown" that don't have a ~"Breakdown Sufficient" label, especially pieces of work in which they have experience or which belong to their group.
Contributions that add new information or insight are welcome, even if they don't constitute a complete breakdown. When a discussion fails to reach a conclusion in a timely manner, include the PM immediately so they can clarify requirements or cut scope.
Often, new complexity is revealed when development starts or as it progresses. This is normal. Team members should re-assess weights when new information comes to light and alert the PM or EM when delivery within the milestone is at risk.
To weigh issues, team members should:
Points of weight delivered by the team over the last three milestones, including a rolling average. This allows for more accurate estimation of what we can deliver in future milestones. Full chart here.
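As a rough illustration of that rolling-average calculation (the milestone names and point totals below are made up, not real team data):

```python
# Hypothetical delivered-weight history per milestone, oldest first.
delivered = {"13.1": 24, "13.2": 30, "13.3": 27}

# Rolling average over the last three milestones gives a capacity
# estimate for the next milestone.
last_three = list(delivered.values())[-3:]
rolling_avg = sum(last_three) / len(last_three)
print(f"Rolling average: {rolling_avg:.1f} points per milestone")
```

Averaging over several milestones smooths out one-off spikes or dips (holidays, incidents), which is why it gives a steadier planning number than any single milestone's total.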