Cybersecurity Risk and Municipalities:
Created by MIT’s Internet Policy Research Initiative
Distributed by the Massachusetts Interlocal Insurance Association
The Massachusetts Institute of Technology is engaged in ongoing research to collect, analyze, and share data on current cybersecurity risk to better inform interested stakeholders and policy-makers. MIT’s SCRAM platform (scram.mit.edu) securely collects anonymously submitted data about cybersecurity controls and incidents to produce easily understood, aggregated risk metrics.
Building on this effort, MIT is partnering with Massachusetts municipalities through the Massachusetts Municipal Association and the Massachusetts Interlocal Insurance Association to help track progress over time of implemented cybersecurity controls.
Cyber Insurance Coverage:
Budgeting and Resources:
Step 1: Population
Step 2: Maturity Levels
For each of the 10 control categories noted in the first column, titled “Category” (Column A for those using Excel), there are associated security controls in the “Control” column to the right. For each control, please mark an “x” in the one gray box per row that best describes your current maturity level.
We recognize that these control maturity level options, ranging from “Not Implemented” to “Fully Implemented,” can be interpreted in a variety of ways. Implementation level should not be measured against your municipality’s own planned end-state, but against the ideal end-state recommended by security frameworks. Ask yourself, “If I had unlimited time, expertise, and resources, what would the end-state for this control’s implementation look like?” The responses should be based on this perspective.
Please find below maturity level rating definitions to help guide your responses to the survey:
- Not Implemented: The described control does not exist within your organization in any form.
- Partially Implemented: Considered 50% or less based on one or more of the following dimensions:
- Network surface area: The control is implemented on some parts of your network but not others (e.g., anti-virus is deployed on Windows endpoints but not Linux, data is encrypted locally but not in the cloud)
- User adoption: The control has been adopted by only a portion of end-users (e.g., MFA is used on 50% or fewer employee cell phones, cybersecurity training is provided to only a small subset of employees)
- Capabilities: The control is implemented in a fashion that does not fully utilize the security capabilities as expected (e.g., patching is done in a manual fashion once a quarter, whereas weekly automated patching is a preferred best practice)
- Largely Implemented: Considered more than 50% and up to 80% based on the same dimensions described above.
- Fully Implemented: 95% implementation or higher. There is an understanding that 100% control implementation is not always realistic or verifiable.
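The thresholds above can be sketched as a simple lookup. This is only an illustration, assuming you have already estimated a single implementation percentage across the dimensions; the function name is ours, and treating the undefined 80–95% band as “Largely Implemented” is our assumption, not part of the survey:

```python
def maturity_level(pct: float) -> str:
    """Map an estimated implementation percentage (0-100) to a maturity label.

    Cut-offs follow the survey definitions: <=50 partially, <=80 largely,
    >=95 fully. The 80-95% band is not defined by the survey; we round it
    down to "Largely Implemented" here as an assumption.
    """
    if pct <= 0:
        return "Not Implemented"
    if pct <= 50:
        return "Partially Implemented"
    if pct < 95:
        return "Largely Implemented"
    return "Fully Implemented"

print(maturity_level(40))  # Partially Implemented
print(maturity_level(96))  # Fully Implemented
```

In practice the percentage itself is a judgment call across the network-surface, user-adoption, and capability dimensions; the lookup only standardizes the label once that judgment is made.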
Step 3: Incidents and losses
Steps 3 and 4 only need to be filled out if the municipality had a significant incident in 2019, 2020, or 2021 with a monetary loss of $1,000 or more. We will use this data to identify which security control failures are leading to the highest losses. The first question asks for the total number of incidents over the three-year period. The second question asks for the sum of monetary losses from all significant security incidents over that timeframe. The estimated losses do not need to be overly precise. For example, losses for two incidents could be estimated to total roughly $15,000.
- A “significant incident” is defined as an attack against your municipality with a minimum loss amount of $1,000.
- An attack that hits multiple endpoints, network segments, platforms, or accounts requiring remediation counts as only one incident.
- Three years refers to 2019, 2020, and 2021.
- A “cyber loss” refers to the financial impact of an incident due to:
- Theft or fraud
- Loss of data or systems that need to be recovered or replaced, or
- Associated legal fines or fees.
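As a sketch of how the two Step 3 answers follow from these definitions, assuming a hypothetical list of per-incident losses (the data below is illustrative only, not from any municipality):

```python
# Hypothetical per-incident records: (year, estimated dollar loss).
incidents = [(2019, 500), (2020, 7000), (2021, 8000), (2018, 12000)]

SIGNIFICANT = 1_000       # minimum loss for a "significant incident"
YEARS = {2019, 2020, 2021}  # the three-year reporting window

# Keep only significant incidents inside the reporting window.
significant = [loss for year, loss in incidents
               if year in YEARS and loss >= SIGNIFICANT]

incident_count = len(significant)  # answer to the first question
total_loss = sum(significant)      # answer to the second question

print(incident_count, total_loss)  # 2 15000
```

Note that the 2019 incident is excluded for being under the $1,000 threshold and the 2018 incident for falling outside the window, leaving two incidents totaling roughly $15,000, as in the example above.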
Step 4: Identifying failures
In the column labeled “Step 4: Identifying failures” (Column J in Excel), please mark an “x” in the gray box for up to five controls that were associated with cyber incidents. For example, if you want to select “3a. Encrypt data in transit”, mark an “x” in the same row under the “Step 4: Identifying failures” column.
In terms of criteria, the cyber incidents associated with these failed controls must have 1) incurred financial losses and 2) taken place within the past three calendar years (2019, 2020, or 2021). When responding, you should ask yourself, “Which controls were responsible for the most financial losses?” A “top control”, as referenced in the Step 4 description, can be one of the following:
- A control that was bypassed during an incident that incurred financial losses. If your municipality experienced several incidents, prioritize this guidance in relation to the incident(s) with higher values of financial loss.
- A control that could have prevented the attack, but is not yet implemented. We understand that this is a more ambiguous definition, so only use this rationale in obvious, clear-cut cases. For example, if an attack resulted in losses because a system was destroyed and no backups existed, the controls associated with the category “6. Backups” may be good candidates for top controls that failed and led to losses.
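The selection rule above amounts to ranking controls by the losses attributed to them and keeping at most five. As a sketch, with hypothetical control labels and loss figures (only “3a. Encrypt data in transit” appears in the survey itself; the rest are illustrative):

```python
# Hypothetical dollar losses attributed to failed or missing controls.
losses_by_control = {
    "3a. Encrypt data in transit": 9000,
    "6a. Maintain offline backups": 4000,
    "1b. Patch operating systems": 1500,
}

# Rank by attributed loss, highest first, and keep at most five controls.
top_controls = sorted(losses_by_control,
                      key=losses_by_control.get, reverse=True)[:5]
print(top_controls)
```

Each control in `top_controls` would then get an “x” in the “Step 4: Identifying failures” column of its row.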
Step 5: Check for errors
The SCRAM platform only processes encrypted data, which the researchers at MIT have no way to read. For that reason, it is important that the data is complete and correct before it is submitted. To help municipalities with this, we have created green and red status markers on each of the inputs in the spreadsheet. These markers flag inputs that are missing or incorrectly formatted. Important: The SCRAM platform will only accept files that show a green “Ready” status and a checksum equal to 100 at the top of the page. If the checksum is higher, there are errors in the document that still need to be corrected. Please contact us at the email address below if you have any questions.
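The spreadsheet’s own status markers are the authority here, but the acceptance rule described above reduces to a simple check. This is only a sketch; the function and field names are hypothetical:

```python
def ready_to_submit(status: str, checksum: int) -> bool:
    """Sketch of the Step 5 acceptance rule: SCRAM only accepts files
    showing a green "Ready" status and a checksum of exactly 100."""
    return status == "Ready" and checksum == 100

print(ready_to_submit("Ready", 100))  # True
print(ready_to_submit("Ready", 103))  # False: remaining errors raise the checksum
```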