This is a transcript. For the video, see Understanding the Drupal Security Team, with Michael Hess.

[00:00:00] Michael Meyers: Hello, and welcome to another Tag1 TeamTalk episode, the podcast and blog of Tag1 Consulting. We've got another awesome show for you today. We're going to be talking with Michael Hess, who's one of the leaders of the Drupal security working group and the Drupal security team.

[00:00:13] Michael is going to give us a rare inside look into what they do and how they operate and work. I'm Michael Meyers, the managing director of Tag1 Consulting. Mike, thank you so much for joining us. Could you give folks just a quick overview, a brief background, you know, who you are, what your role is in the Drupal Security Team?

[00:00:32] Michael Hess: Sure. So, I have been in IT since I was young. Like most people that I know in this community, I went to the University of Michigan for an undergraduate degree. Got a master's degree, started teaching classes at the University of Michigan. Got involved in Drupal with a nonprofit venture at that point, and then kind of moved into using Drupal in a bunch of places, and then got involved with the security team by finding bugs or security vulnerabilities and submitting them.

[00:01:04] Michael Meyers: Wow. How long have you been doing Drupal now?

[00:01:09] Michael Hess: It feels like a trick question. I, I don't know if I want to like, tell you how old I am.

[00:01:16] Michael Meyers: I've been doing Drupal for, oh my gosh, almost 17 years. And as far as I remember, you've been part of it for most, if not all, of that. You know, my memory is fuzzy at this point, but it's been a long time.

[00:01:28] Michael Hess: I just looked up my profile and it says that I've had an account on Drupal.org for 14 years and four months.

[00:01:34] I'm fairly certain that I was involved in Drupal two years before I created an actual account and started, like, contributing back. That was a mistake, by the way, I should've done that immediately, but you know, I can't go back in time 14 years.

[00:01:49] Michael Meyers: So what is the Drupal security team? And what's your role on the security working group and the security team?

[00:01:57] What is the difference? Like, there's a security working group, there's a security team. What's going on?

[00:02:03] Michael Hess: So the security team is actually, I think, the oldest sub-grouping of community members working on a specific project, with the exception of maybe core maintainers. But we have lots of different groups in Drupal that do things.

[00:02:20] And I think, and someone could correct me on my history here, and I'm sure someone will, that the security team may have been one of the first. Other than core committers, it may be the first one that's still active and running. You know, we had the documentation group that got created.

[00:02:38] There's the copyright group that's around. And, you know, as Drupal has matured, we have a lot of these specialized groups that are involved. The security team is responsible for the security of the code base itself. We don't take a look at individual people's sites.

[00:02:56] We don't even proactively necessarily scan for vulnerabilities, but when there's a security issue that's reported to us, we are the group that is responsible for the triage and the process to get that fixed. The security working group is somewhat newer than the security team, and it is really the liaison group between other groupings within Drupal and the security team itself.

[00:03:23] The security team is 30ish people; the working group is between two and three folks at any time. And, you know, we take the concerns of the security team and figure out how, and the processes, to get things fixed. And so that might be, hey, we need a new feature added to Drupal.org to handle this type of thing.

[00:03:44] How do we go about doing that? And so, you know, we take those needs and we bring them up the food chain, so to speak. There are 32 people on the security team currently.

[00:03:54] Michael Meyers: Wow.

[00:03:55] And is everybody on the team a volunteer?

[00:03:59] Michael Hess: Yes. And I say yes because everybody on the team is a volunteer; the Drupal project does not pay anybody to be a team member.

[00:04:10] So, you know, there's no job application with, like, the Drupal Association to become a Drupal security team member and receive a check. So everybody is a volunteer on the security team. Having said that, there are some organizations that will pay for the time volunteers give to the security team.

[00:04:28] So they get sponsored to be on the security team. Now, I should be careful in saying that, because it implies that, you know, if you pay for a security team member, you might get access to information. When that occurs, the company that's sponsoring it does not get any information back from the security team.

[00:04:44] We do thank companies who sponsor people's time in security advisories; it shows up in a block on the side of the page. But there's not, like, an information sharing thing there. You know, it's like sponsoring somebody to work on a module. And obviously, if you're in a sales process, you could talk about, you know, the fact that we have X number of people who are on the security team.

[00:05:06] And therefore, when we're selling you on the fact that we know how to do security, we speak with authority. But there's no, you know, hey, if you contract with XYZ company, we have six people on the security team, and those six people will tell you all the secrets of Drupal. That would be grounds to be removed from the security team.

[00:05:25] Very, very quickly. And so, you know, the process of the security team overall: we have volunteers, and some of the volunteers have specializations. You know, there's folks that really are good with this area of the code base, or are good at the integration problems where different subsystems combine.

[00:05:48] And we have generalists who, you know, will take an issue and do some triage. Maybe they'll do some testing, maybe they'll do code reviews. And so, yeah, that's kind of the overview of the security team.

[00:06:01] Michael Meyers: You know, given these different specialties, and I'm sure they ebb and flow over time, but you know, is it possible to generalize?

[00:06:09] Like, if someone is participating in the Drupal security team, on average, you know, are we talking they're putting in 20 hours a week? Two hours a week? Or, you know, does it ebb and flow?

[00:06:20] Michael Hess: It definitely ebbs and flows. And you know, like with most things with COVID, that's had an, you know, that's had an impact on this, but for the most part, you know, we have people who put in, we like to ask for a set block of time a month and that's kind of in the application process.

[00:06:37] But in reality, you know, once you're on the team, we will accept what you have available. You're a volunteer, you're not an employee, you know, and things happen. so we have people who do everything from, you know, five hours a month to, you know, a hundred hours a month. and it depends on what the, you know, what's happening that month.

[00:06:56] We may have a large release where we need to pull in a lot more people who are going to be reading over a core release, for example, or we may have some cleanup tasks that we need to do. And the, you know, we'll do, we'll get together. Unfortunately, not in person at the moment. But previously we, you know, we might arrange a security team sprint or a couple of us would get together and kind of work through issues.

[00:07:17] You know, it may be that you get assigned an issue that looks like it may be easy on the surface, and as with most things, when you start digging into it, it's quite complex. And so, you know, your two-hour task is now a 15-hour task, because that never happens in software development. But as for the average time, I don't think there really is an average.

[00:07:41] It also depends on if you're receiving support from your company to be putting time in. Are you a volunteer that's doing this out of personal time, or is your employer sponsoring you?

[00:07:54] Michael Meyers: Is there some sort of, like, minimum monthly obligation to stay active in the group?

[00:08:00] Michael Hess: With some exceptions, sure. I mean, you know, we're all humans and sometimes people need to take breaks, and whatever it is. You know, we're all on a Slack channel, and often, you know, your obligations may be in there. But how the team works together is a process that's kind of organic.

[00:08:22] You know, because there is sensitive information on the team, if people aren't active for an extended period of time, we typically have a conversation with them and say, hey, you know, can we get some assistance here? Once you are a security team member, if you need to step down, you know, for whatever reason. We had a security team member whose wife had a baby.

[00:08:42] It was like, I don't have time to do this anymore. You know, you're always welcome to come back if you need to leave and, you know, come back at a later date. We're all human.

[00:08:52] Michael Meyers: Yeah, definitely. You mentioned that it's very organic. Are there any formal structures or roles? Like, how does, I mean, it's funny, I've been in the Drupal community for almost 20 years now and, you know, you guys are a bit of a black box to me.

[00:09:09] And so I was really excited about this conversation, because I've always wanted to know, like, what happens when someone sends in a vulnerability report? What happens then?

[00:09:22] Michael Hess: So, two separate questions. Let's take the second question first, though: what is the workflow? And I'll share with you after we're done.

[00:09:32] I've got a diagram of this, actually. But effectively, what will happen is we receive reports, and they typically fall into three categories, maybe four. We have a security-conscious person who is not really part of the Drupal community, who thinks they found an issue with Drupal, and they submit it.

[00:09:54] And sometimes those are, hey, I installed Drupal, I logged in as admin, you know, UID one, and I turned on full HTML, and I can now hack my site because I can get cross-site scripting to fire. So, you know, that's one case of things. We get reports from people who are security researchers who, you know, found an actual issue, that isn't, you know, "you used an admin account to do something."

[00:10:21] We get people who are community members who are trying to find issues, or, you know, may have found one by mistake, or they work for a company and their security team is reporting something to them. And then sometimes we'll have module maintainers, who are responsible for their own module code, come to us and say, hey, I made a security boo-boo, can I get some help here? Unless it's the module maintainer reporting, the workflow typically is that the issue will come in and it'll get triaged. And sometimes that's as easy as just reading the report and saying, yup, this seems like a valid issue. Sometimes there's a little bit more testing that goes into it.

[00:10:58] We'll actually try to reproduce it. It depends on the issue and where the issue is involved. At that point we will bring in the code owners, or the code managers. You know, for core, there's framework owners, there's subsystem maintainers.

[00:11:19] And so, you know, we find out where in core that belongs. For modules, it's the maintainers of the module. And you know, typically there's an automated message that goes out that basically says, Hey, there's a security issue. Don't fix anything. Read over this, propose a patch.

[00:11:33] And at the moment it's still a patch-based workflow. We are looking to change that now that Drupal.org has GitLab, that's amazing, but for now we are still using patches. And what we'll do is we'll work with the code maintainer to fix the problem in private, and we'll agree on a fix.

[00:11:58] And sometimes, you know, if it's the maintainer who's reporting the issue, oftentimes it comes with a patch. Hey, I broke this, here's how to fix it, anybody have any feedback? We try very hard to keep the patches that fix security issues to the security issues only. We don't want, you know, especially in contrib, there's sometimes the thing of, oh, well, I've also got all these other changes.

[00:12:19] I want to release the security patch, and I also want a 500-line diff of lots of other things that I'm fixing at the same time, in a security release. You know, we want the fix to just fix the security issue, because it makes it easier for folks that need to do the updates. After we've kind of gotten to the point where the code is ready, we will have a discussion around the security advisory.

[00:12:42] And so the tooling we have helps us draft these internally. We will go through and write up the security advisories, which is what gets published to Drupal.org. And once we've got them written up, we then go through this publishing ritual: the maintainer will commit their code, create the releases.

[00:13:03] We will take care of putting the security advisory on Drupal.org, linking to those releases. And then we publish both the release and the advisory at the same time on Wednesday, send an email, put it out through Twitter, and typically make an announcement in Slack.
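
The triage flow he walks through can be summarized, very loosely, as a linear sequence of states. This is an illustrative sketch only: the state names are paraphrased from the conversation, and the real drupal.org tooling is private and certainly more involved.

```python
# Illustrative sketch of the triage workflow described in the talk.
# State names are paraphrased; this is not the actual drupal.org tooling.

WORKFLOW = [
    "reported",             # issue arrives via the private queue
    "triaged",              # team reads/reproduces and confirms validity
    "maintainer_notified",  # code owners brought in, told not to fix in public
    "patch_agreed",         # fix developed and reviewed in private
    "advisory_drafted",     # SA written with the internal tooling
    "published",            # release + SA go out together on a Wednesday
]

def advance(state):
    """Return the next workflow state, or None once published."""
    i = WORKFLOW.index(state)
    return WORKFLOW[i + 1] if i + 1 < len(WORKFLOW) else None
```

The point of the linear shape is the one he emphasizes: nothing becomes public until the very last state, when the release and the advisory land together.
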

[00:13:19] Michael Meyers: Can you share the kind of volume? I don't know if that's information that you can disclose, but like, are we talking tens of reports, hundreds, thousands?

[00:13:27] Michael Hess: It's not hundreds.

[00:13:29] It really seems to go in cycles. You know, I will say that at the start of COVID, we got a lot of false reports, which was interesting. I think people were working at home and they were trying to find issues. And when I say false reports, I want to be really clear: a false report isn't bad. If people are looking at a problem and trying to find holes in it, and they submit something and it's not valid for whatever reason, that's not a bad thing necessarily. That's a good thing. They were spending time on it, we're giving them a response. Now, there is a grouping of people who do a lot of spamming around reports.

[00:14:09] And, you know, there's a line, where I'm spamming a project with a report, I'm writing the same thing to 20 different projects, and it's the exact same canned write-up: "You're missing DMARC records. This is a major security issue. You must fix it immediately." Okay.

[00:14:27] And, like, they've clearly written an automated thing that's querying a bunch of domains and sending an email based on the response from the DNS query. That's one side, where, okay, that's a little abusive. But if someone's going to download Drupal and spend time finding something in it, we encourage that.

[00:14:44] Even if it's a false report. And, you know, we try to be careful, when we're telling people that the reports they sent are invalid, that it's: okay, try again, eventually you'll find something. We all can't start off finding security issues. You know, I teach a class on this at U of M, and it's interesting. You get the question: how long does it take to find a valid security issue?

[00:15:08] You want to guess?

[00:15:10] Michael Meyers: I mean, it could take seconds, it could take years. I would imagine.

[00:15:13] Michael Hess: That is correct.

[00:15:15] There's not an average time there. Sometimes you look at code, you think about it in your head, and, oh, whoa, wait a minute, that's SQL injection there. And sometimes you can look at code and you've got this instinct, and it's like, I think there's something wrong there, but I don't know what.

[00:15:34] And so, yeah, this is, you know, this is the fun of finding security bugs. So I'm always encouraging people to find security bugs. If you find me at a conference, I am more than happy to help people, you know, get started finding security bugs. In fact, anybody on the security team is probably happy to help with that.

[00:15:53] Michael Meyers: And I would imagine by its nature, the majority of things don't pan out, you know, they turn out not to be problems and bugs, you know, and that's a good thing.

[00:16:03] Michael Hess: The majority of issues we get probably fall into that category. As for the ones we get that are bugs, you know, typically, after having read thousands of these, you can tell pretty quickly how serious something is.

[00:16:20] You get like a sixth sense for reading the vulnerability reports. At least I do. I don't know if other people do.

[00:16:29] Michael Meyers: So what's the silliest bug report? Is it the "I logged in as an admin and it let me do anything"?

[00:16:35] Michael Hess: I think the worst one was "I edited a PHP ------, and now I can hack my site." But, you know, even in that instance, yes, that's an invalid report, but it's about writing back, and not saying, well, that's dumb.

[00:16:53] Why would you do that? Like, you know, there's an education part there. Well, how were you able to edit PHP code? Here's a Drupal site; how would you edit PHP code on it, you know, if you don't have access to the server? And then there's an education component there.

[00:17:10] And we want to be careful that we're not discouraging people from trying to find issues because they think they found something and they haven't.

[00:17:21] Michael Meyers: I have another question. Do people try to bypass the security team, and are there any examples of that? You know, so maybe, for example, instead of reporting a vulnerability to you, to you meaning the security team, they post it publicly, or, you know?

[00:17:39] Michael Hess: We don't have a lot of examples of that that I can think of, at least nothing recent. And I think bypass might be the wrong word here.

[00:17:49] There are people who are unaware of what we do and how we do it. So it might be, you know, they're a new module maintainer and they just fix their module. We've got some controls in place, technically, that prevent that from happening. So, like, if you're publishing a module release and you check the security release button, it won't publish until a security team member comes in and says, yup, that's a security release.

[00:18:14] And, you know, the process has been followed around that. One of the things that we'll get threats on is, we'll get security researchers who are like, hey, I found this bug, it's moderately critical, and you have a week to fix it. And if you don't fix it within a week, you know, I'm gonna publicly talk about this.

[00:18:34] That happens occasionally. For the most part, almost all security researchers I've worked with have been reasonable people. And, you know, Drupal core does not release every week; we don't have a security window for Drupal core every week. And so if you talk to them and you engage them in the process, they are typically, you know, reciprocal.

[00:18:52] There was one group that we were working with, this was a while ago, where they submitted an issue, and we fixed the issue and we posted a patch and asked them to test it. And they thought that was really cool, because normally they're used to just reporting bugs and, you know, companies just fixing them and saying, we fixed it.

[00:19:11] And they're like, we get to be involved in the process of fixing the issue? This is cool. And it's like, well, welcome to open source.

[00:19:21] Michael Meyers: I didn't know that. So when you, as a module owner, do a release, you have to check a box which says this is, or isn't, a security release. And if I check it and say it is, then someone on the security team has to sign off on it?

[00:19:35] Michael Hess: I wouldn't use the words "sign off," and I don't remember the exact wording of it, but what happens with security releases is they're published with security advisories. And so the security team is responsible for the security advisory, the module maintainer is responsible for the release itself, but we want to publish them in sync.

[00:19:54] And so what'll happen is we will tell the module maintainers to go in and cut a release just like normal, but check that security box. And it does everything that it normally does, but it won't actually publish the release note. And so when we publish the SA, we will then go publish the release note.

[00:20:11] Actually, we do it in the other order: we'll publish the release note and then publish the SA, so that when you get the email, you can click on the link to the release and you don't get an access denied message when you go to the release note page. And, you know, there have been times when we've forgotten to publish the release notes.

[00:20:32] I think it's happened, like, twice in my memory. Typically when there are a lot of different releases in a day; sometimes, you know, there's a lot of clicking.
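
The gate and the ordering he describes could be modeled, very roughly, like this. The field and function names are invented for illustration and do not reflect the actual drupal.org data model.

```python
# Toy model of two behaviors from the talk (names invented, not drupal.org's
# real schema): (1) a release checked as a security release stays unpublished
# until a security team member confirms it, and (2) on advisory day the
# release note goes out first, then the SA, so the SA's link never hits an
# access-denied page.

def may_publish(release):
    """Non-security releases publish immediately; security releases wait
    for security-team confirmation."""
    if not release.get("is_security_release"):
        return True
    return release.get("security_team_confirmed", False)

def publish_in_sync(publish_note, publish_advisory):
    """Run the two publish steps in the order described in the talk."""
    order = []
    publish_note()            # 1. release note goes public first
    order.append("release note")
    publish_advisory()        # 2. then the advisory that links to it
    order.append("advisory")
    return order
```
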

[00:20:41] Michael Meyers: You mentioned, in your example, you said, oh, I have a moderately critical vulnerability. Is there, like, a defined set of terms?

[00:20:49] Like, you know, critical, moderately critical, not so critical. And how do you determine, you know, what it is?

[00:20:58] Michael Hess: So, a while ago on the security team, you could get four security team members in a room, have them look at the same issue, and say: what score would you assign this? You know, less critical, moderately critical, critical, highly critical, not a vulnerability.

[00:21:13] And with those four people, you might get five different responses. And the reason for that is because we're all coming at this with slightly different, you know, backgrounds. I run a bunch of education sites. My background is in running education sites. And so when I'm thinking about a security release, I am thinking about me personally and my sites.

[00:21:34] And, you know, everybody on the team comes from different backgrounds. There's a lot of people on the team who do client work, and they're thinking about their clients' sites; how does this impact that? But that doesn't really translate well when we're talking about, you know, how do we communicate this out to third parties?

[00:21:48] You know, if you've got a site that's a brochure site where users are not logging in, you don't really have untrusted user input in a node body, for example. And so an issue where someone can put something bad in the body of a node, okay, not really a problem for you, because you've got eight people who can log in to your site, they're all employees, and they're all relatively trustworthy.

[00:22:10] Whereas if you've got a site where, you know, anyone can create content, for example, well, that's a whole different issue. And so one of the things we built was this risk calculator tool, which is basically based off of some of the NIST and CVE standards, that asks some questions and generates a risk score for a module.

[00:22:32] And that's what you see on Drupal.org with the security advisories: it'll have a score out of 25, and it'll have the words, you know, moderately critical, highly critical, critical. And so the idea is that we can get four people in a room, have them rate an issue, and we end up with one score. You know, in the future,

[00:22:53] I would love to expand that out and put personas with it. And so, you know, here are four different types of sites that we commonly see, and for each site, here is the risk score associated with it. Because, you know, we really don't have one monolithic site. If you've got a site that is entirely providing a REST API to a front end tool, then you've got a whole other layer of security issues that you might be concerned about.

[00:23:18] Versus me, you know, just having a brochureware site where the Drupal theming layer renders everything, there's no REST endpoints enabled, there's no user interaction. You know, there's two different sets of vulnerabilities there. Obviously there is a common set that, you know, no matter what I'm doing, all apply.

[00:23:38] But, you know, I think that if we expand this out, the evolution of that would be, you know, we're going to give you a risk score by your site's use case, like a persona, almost.
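
A calculator like the one he describes, answer a few questions, sum weights to a score out of 25, map the score to a severity label, might be sketched as follows. The question names, weights, and label thresholds here are invented for illustration; the real drupal.org calculator uses its own NIST-derived metrics.

```python
# Hypothetical sketch of a 25-point risk calculator; the metric names and
# weights are invented and are NOT the real drupal.org calculator's.

QUESTIONS = {
    "access_complexity": {"basic": 4, "complex": 2, "theoretical": 1},
    "authentication":    {"none": 4, "user": 2, "admin": 1},
    "confidentiality":   {"all_data": 5, "some_data": 3, "none": 0},
    "integrity":         {"all_data": 5, "some_data": 3, "none": 0},
    "exploit_known":     {"exploit_exists": 4, "theoretical": 1},
    "target_audience":   {"default_config": 3, "uncommon_config": 1},
}

# (threshold, label) pairs, checked from most to least severe.
LABELS = [(20, "Highly critical"), (15, "Critical"),
          (10, "Moderately critical"), (5, "Less critical"),
          (0, "Not critical")]

def risk_score(answers):
    """Sum the weight of each answer; the maximum possible here is 25."""
    return sum(QUESTIONS[q][a] for q, a in answers.items())

def severity(score):
    """Map a numeric score to its severity label."""
    return next(label for floor, label in LABELS if score >= floor)
```

The design point is the one he makes: four reviewers answering the same concrete questions converge on one number, where four gut-feel ratings would not.
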

[00:23:51] Michael Meyers: Now, you said earlier that you guys are mostly or entirely reactive, in that, you know, you're not proactively looking for security vulnerabilities.

[00:24:00] If someone reports a problem, the team will investigate it. So do you guys have a role in, like, the Drupal core release process? Or, because of the fact that it's, you know, reactive, you know, like, explain that to me.

[00:24:17] Michael Hess: So let's put core off to the side for one second. We are responsive to reporters telling us about security issues. We have not found, and we have looked, and if anybody has ideas please reach out to me, some type of good proactive scanning of sites. You know, one of the largest things we get are false positive reports that are generated by automated scanners. There is some work by a couple members of the community to do a Drupal-specific site scanner that, you know, is familiar with Drupal; it's a static code analysis tool.

[00:25:02] It's looking at the source code. Sometimes it's running it and testing, you know, if I put this in this variable, where can it come out on the other end? But it's a really hard space to proactively look for security vulnerabilities in. And this is not just us, by the way; this is every open source project.

[00:25:24] You know, one of the things that we haven't discussed yet is, one of the scariest things with the new world we're in is the dependency chain. It's not just Drupal's code, it's the entire ecosystem: every dependency that we pull in, and then of course every dependency's dependency's dependency that we're pulling in. And to proactively try to review all that code is not really directly possible.
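
The fan-out he describes is easy to see in miniature: even a short direct dependency list balloons once you follow each dependency's own dependencies. The package graph below is invented for illustration (real dependency trees are far larger):

```python
# Tiny invented dependency graph to illustrate transitive fan-out.
# Real trees resolved by Composer are much larger than this.

DEPS = {
    "drupal/core": ["symfony/http-kernel", "twig/twig"],
    "symfony/http-kernel": ["symfony/event-dispatcher", "psr/log"],
    "twig/twig": ["symfony/polyfill-mbstring"],
    "symfony/event-dispatcher": [],
    "psr/log": [],
    "symfony/polyfill-mbstring": [],
}

def transitive_deps(pkg, seen=None):
    """Collect every package reachable from pkg's dependency list."""
    seen = set() if seen is None else seen
    for dep in DEPS.get(pkg, []):
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, seen)
    return seen
```

Here two direct dependencies turn into five packages to review; scale that up and line-by-line review of the whole chain stops being feasible, which is his point.
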

[00:25:51] So, you know, we do need better automated tooling around this. I don't know what that looks like yet. But the security team since the beginning has been a reactive group. I don't like that necessarily; I would love us to be more proactive. But I think the space in which we are proactive now isn't necessarily in code scanning, it's in education. You know, there isn't a DrupalCon, at least, where we don't have at least one security-focused presentation. It is much easier to help developers be aware of security issues and be trained on security issues than it is to pay developers to go and review code line by line, trying to find security issues.

[00:26:44] Michael Meyers: I think I should have said this at the start: you know, you and the team are amazing. Like, the work that you do, the fact that it's been part of Drupal for so long, the effort that people put in, the coverage that we get. You know, I really do think Drupal in many ways is a model for many open source projects.

[00:27:05] And I think the security team in particular is something that we do really well. You know, so you guys should be really proud of what you've achieved and what you've done. And it is not an easy problem, for all of the reasons that you mentioned, and it is impossible in the current world and circumstances to be proactive.

[00:27:26] You know, it's just not going to happen.

[00:27:30] Michael Hess: You know, what we have done, which is unique among content management software frameworks, is we cover our plugins, our modules, our ecosystem. You know, most of the major content management platforms' security teams cover their core products. We issue security advisories for modules that have opted into that process.

[00:27:52] And so our workspace is significantly larger. You know, for a while we were talking with a few vendors about, you know, replacing our tooling with a vendor-supported tool, and that might or might not have come with vendor-supported people to do some of the initial triage, which would have been awesome.

[00:28:14] And then they realized the scope that we cover, and they slowly backed away. You know, when we've run bug bounties, we cover a much larger scope than almost all the other open source projects that are running bug bounties. You know, what's nice about Drupal is we have great APIs, and when our APIs are used correctly, they prevent a large number of security issues.

[00:28:43] I remember when we had cross-site scripting issues everywhere, and then we turned on Twig, we moved over to Twig, and, oh, most of our cross-site scripting issues are no longer a problem. And sure, occasionally you'll have a developer who passes things through raw and, you know, escapes the Twig auto-escaping, and reintroduces cross-site scripting issues.

[00:29:02] But that's an education thing. And it happens, and, you know, that's actually one of the things that's easy to automatically scan for, but in some instances that's an acceptable use. And so let's actually take that back to the earlier question. If we were to try to scan for it, you could say, well, let's find everything in a Twig template that's going through the raw filter and bypassing the auto-escaping tool.

[00:29:29] There's a use case for when that should be allowed. And so how do you program that? You know, if I send out a security report for every time raw is used, well, then someone's got to track down and find all the code and be like, okay, 95% of these are valid use cases, and that one over there requires an admin-only permission to do it, which we won't issue a security advisory for anyhow.

[00:29:54] That's why automated code scanning is really difficult. Because, you know, you have to have the mental model to kind of... sorry, I derailed the question a little bit.
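
The naive scan he sketches, flag every use of Twig's `raw` filter, is simple to write; the hard part, as he says, is that most hits turn out to be legitimate. A minimal sketch:

```python
# Naive sketch of the scan described in the talk: flag every use of
# Twig's "raw" filter in a template. As noted in the conversation, most
# hits will be acceptable uses, so a human still has to triage each one.
import re

RAW_FILTER = re.compile(r"\{\{[^}]*\|\s*raw\b[^}]*\}\}")

def find_raw_usages(template_source):
    """Return (line_number, line) pairs where the raw filter appears."""
    hits = []
    for lineno, line in enumerate(template_source.splitlines(), start=1):
        if RAW_FILTER.search(line):
            hits.append((lineno, line.strip()))
    return hits
```

Every match is only a candidate: distinguishing "95% valid use case" from "actual XSS" is exactly the judgment a scanner cannot make, which is his point.
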

[00:30:04] Michael Meyers: So another thing that's fascinated me for a very long time, you just mentioned, you know, the security team doesn't just cover core.

[00:30:13] It also covers contrib. And when you're browsing contrib modules, you'll see, you know, this module isn't covered by the security team. Like, you said something about a checkbox. Is that something that people opt in to for coverage? Is that something the security team decides, we're not going to cover this?

[00:30:30] How does contrib coverage get determined?

[00:30:34] Michael Hess: So maintainers are given the permission to opt in their modules. And at the moment that is based off of a fairly lengthy process, which is the project application queue process. There's been an initiative within the security team to replace that process with a better mechanism.

[00:30:53] That process is slow, in that it takes a lot of people a lot of time to get through it. It requires the work of volunteers to do reviews for other modules. It's a very effective process, in that it's really good at education, but it is a very frustrating process. So once you're through that process, you can opt any module you want into the security team workflow, but it is still your choice. And while you may have that privilege, you may say, you know, this module really shouldn't be used in production; I don't want security team coverage for it. I'm okay with the issues here being put in a public queue, because you shouldn't be running my module that, you know, prints all the available variables on a production website. The module itself is a security risk.

[00:31:49] You'll have folks that say, you know what, this module isn't really ready for prime time. Yes, it's getting used, but I'm not done with it; I don't want, you know, there's a warning that comes with it. And every now and then you'll see people who will go into a module's queue and say, hey, please opt this into coverage.

[00:32:08] We really want to use it on our site, and, you know, we really like to only use modules that are covered by the security team on our site, so we get proactive notification. Or not proactive, but notification about security issues in advance. Yeah, not in advance. I'm sorry, I have not had my coffee this morning and this is obvious.

[00:32:27] I apologize, I should have had a cup of coffee first. No one gets notification in advance, let me be very clear on that. But you get notification when there's a release published. If you're dealing with a module that isn't covered by the security team, then, you know, the module maintainer can fix a critical security bug at two in the morning on a Friday night.

[00:32:52] And, you know, maybe you'll get the RSS notification. What the security team process gets you is that security advisory that comes out on Wednesdays, and you're aware, you know what that window is, and you can have your staff available on Wednesdays to address that concern.

[00:33:09] Michael Meyers: So, Michael, you've been unbelievably generous with your time today. We went way over. You know, there's so much more I want to ask you. I mean, I guess for a guy who's been involved in the community for a really long time, you gave me insight and answers to things that I've been wondering about for a really long time.

[00:33:24] So I hope that other people learned as much as I did. I'd love to have you back in the future, because there are so many more things I wanted to cover. But this was great. Really appreciate you joining us.

[00:33:37] Michael Hess: Thank you for having me. Have a great evening and a good weekend, even though it's a Friday, and I don't know when people are going to be watching this, but have a good evening.

[00:33:46] Michael Meyers: Thanks, Michael. Please remember to upvote, subscribe, and share this out. You can check out our past talks, including the end-of-life presentation I mentioned, on our site. As always, we'd love your feedback and suggestions. Let us know what you thought of the show and topic ideas for the future.

[00:34:07] You can reach us at Thank you guys so much for tuning in and joining us today. We'll see you soon. Take care.