This is an edited transcript. For the blog post and video, see: Gander Automated Performance Testing - Video Demo with Catch.


[00:00:05] Mariano Crivello: Welcome to Tag1 Team Talks, the podcast and vlog of Tag1 Consulting. On today's show, we're going to talk again about Gander. Gander is the new automated performance testing framework built by Tag1 Consulting and the Google Chrome team that's now part of Drupal Core. I'm Mariano Crivello, and I'm based out of Kiloa, Hawaii.

[00:00:22] Tag1 is the number two all-time contributor to Drupal. We build large scale applications with Drupal, as well as many other technologies, for Global 500s and organizations in every sector, including Google, the New York Times, the European Union, the University of Michigan, and the Linux Foundation, to name a few.

[00:00:39] I'm joined again today by Nat Catchpole, aka Catch, a lead developer at Tag1 based out of the UK. He's one of the most well known contributors to Drupal. Nat is a Core maintainer, release and framework manager, and the performance topic maintainer. He's also the architect of Gander, the tool that we're talking about again today.

[00:00:56] This is part two in a two part series. In this episode, we're going to [00:01:00] dive in a little bit deeper and demo Gander for you all. If you haven't already, make sure to check out part one where we discuss how Gander came to fruition. Let's jump in.

[00:01:08] I know you wanted to do a quick demo for us today to walk us through Gander. First, I wanted to recap a quick history of the project. Give us a little background on what Gander is and maybe where it's going.

[00:01:21] Nat Catchpole: Yeah. So, um,

[00:01:23] at Tag1, we were contacted by the Google Chrome team.

[00:01:26] They wanted to improve Drupal's Core Web Vitals. It started off with some improvements to Core, mostly image handling, lazy loading, things like that. And then for the next phase, rather than specific changes, we decided we'd work on performance testing so that we could have a good overview of how Core's doing,

[00:01:53] and whether it's getting better or worse, basically. One of the hardest things [00:02:00] with Core development is monitoring performance over time. It takes a lot of manual work by a small number of people: doing local profiling, running Lighthouse manually, things like that.

[00:02:12] Not many people do it. It's quite time consuming, and it's hard to compare like with like across time; you have to make sure that the page you're hitting is in the right kind of state. So we came up with the idea of Gander, which is a combination of PHPUnit tests with an extra base class, connected to an OpenTelemetry stack using Grafana, Grafana Tempo, and Prometheus. That gives you PHPUnit assertions on the one side,

[00:02:44] and a dashboard with graphs over time on the other. We'll have a look at that in a minute.

[00:02:51] Mariano Crivello: Yeah. And so there's really not been this type of testing in the Drupal project, outside of maybe organizations or groups that [00:03:00] have done it on their own projects. This is really kind of a first for the Drupal Core project.

[00:03:06] Nat Catchpole: Yeah. We've never had automated performance testing. I actually opened an issue to add it in about 2009, worked on it a little bit, it never got anywhere, and it just kind of fizzled out. So it's been good to actually get it done and have the tests running.

[00:03:23] And we have tests running on Core every six hours on the Core GitLab pipelines. So even though it's still quite early stages, it is running against Core, against the actual commits, all the time, and you can see the changes on the dashboard. Three times a day,

[00:03:40] you get a test run and you can see what's going on.

[00:03:43] Mariano Crivello: Awesome. Well, why don't we dive right in? I know that you wanted to show us Gander in action, and then we'll follow up with how someone like myself can get started using Gander.

[00:03:54] Nat Catchpole: Okay. So this is my local Drupal Core development checkout.

[00:03:59] It's [00:04:00] a clone of Core, and I use DDEV for local development, which not everyone does, but a lot of people do. This currently does not have Gander installed. So what I'm going to show is installing Gander via DDEV. We'll kick off a test; it takes a while to run, so we'll look at something else while we're doing that and then come back, and you'll see what happens.

[00:04:23] And then once we've done that, I'll see if I can add some additional test coverage to Core. We'll see how that goes. So I'm in the root of the Core checkout, and the DDEV environment is currently off. Because Gander is a DDEV extension, all you have to do to get it is this
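
For readers following along, the commands themselves aren't captured in the transcript. Assuming the add-on name from Tag1's public ddev-gander repository, and the classic `ddev get` syntax (newer DDEV releases spell it `ddev add-on get`), the install looks something like this:

```sh
# Download the Gander add-on into the project's .ddev directory.
# The add-on name is an assumption based on Tag1's public repository.
ddev get tag1consulting/ddev-gander

# Start the environment; the add-on brings up Grafana, Tempo, Prometheus
# and the OpenTelemetry collector alongside the usual web and db containers.
ddev start
```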

[00:04:52] and it should bring it down. That's literally it. And then I start my environment up. [00:05:00]

[00:05:01] Mariano Crivello: Now I'm noticing a couple of different technologies here, like Grafana and Prometheus. Maybe explain how all of this works together?

[00:05:12] Nat Catchpole: Yeah. So OpenTelemetry is like a specification for how to record performance traces.

[00:05:21] It's mostly for production monitoring. We're kind of abusing it slightly and using it in a pipeline environment instead; not many people might be doing that, it's more of a production monitoring stack. And all of the bits are pluggable. So there's the OpenTelemetry Collector, which is just an endpoint that you send your data through, and that routes things to whatever OpenTelemetry-compatible backends you have. It took us a while to figure out which ones to use; there are a few different ones. If you just Google OpenTelemetry, you'll find things like Jaeger and Prometheus. But [00:06:00] we settled on Grafana with Grafana Tempo and Prometheus as the backend. Grafana is basically a UI for executing queries on data stores and building dashboards with graphs from those data stores, pretty much.

[00:06:18] And then Grafana Tempo stores traces. A trace is a single request, with the different parts of that request, with timings and a bit of metadata. It's like one event with sub-events inside it, and that gets stored as a trace. So it's just a recording of something that happened when you hit a particular page.

[00:06:46] And then Prometheus is a time series database, and that stores metrics, which can be any arbitrary thing, but generally a count or, usually, how long something took. That allows you to show things across [00:07:00] time. So they're two different database stores, but they all talk to each other.

[00:07:04] There are other alternatives out there. Some of them are closed source, some of them are open source, some of them are in the middle. This was the most feature-filled open source stack that we could find, and it's been working pretty well so far. All the features that you need are there.

[00:07:23] You just have to put it together. So what I'm going to do now is go into my environment. And then, have I got it in history? Yes I have. I'm going to run the test. For various reasons, you have to run the test three times to get anything in the graphs. That's because it's not just

[00:07:54] taking timings, it's taking histogram buckets and things like that, so it needs a bit of warm-up. So [00:08:00] we'll kick those off, and hopefully they'll run. What I'm doing here is just running one method, to cut down on the amount of time that the test run takes.

[00:08:08] That's what the --filter option is doing: it's just finding one method in this class, from the Umami front page performance test. So that will run. It's just a normal PHPUnit test, a functional JavaScript test under the hood, and it will start sending data to Grafana in a matter of minutes. I'm not going to say how many minutes, 'cause I'm not sure.
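
As a sketch, the kind of PHPUnit invocation being described looks like this from inside the DDEV web container; the test class path and method name are assumptions based on the Umami performance tests that shipped with Drupal core 10.2:

```sh
# Run a single method from the Umami front page performance test.
# --filter restricts the run to one test method.
vendor/bin/phpunit -c core/phpunit.xml.dist \
  --filter testFrontPageHotCache \
  core/profiles/demo_umami/tests/src/FunctionalJavascript/OpenTelemetryFrontPagePerformanceTest.php
```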

[00:08:32] Mariano Crivello: No problem. We will definitely fast forward through that.

[00:08:34] Nat Catchpole: Yeah, we'll leave that running and we'll come back to it later. Okay. So while those tests are running, here's one I made earlier. That might be a British TV reference that not everyone will be familiar with. But these are the production Drupal Core performance test results. This is running on the 11 and 10 Drupal Core branches, and [00:09:00] at the moment the tests run every six hours, partly to not destroy the Drupal Association's testing budget, but also because we don't commit that many patches a day, and you should be able to see dramatic differences quickly enough if they happen.

[00:09:16] Mariano Crivello: So maybe a good question to ask here: is this something that only runs on a very specific timeframe, or does it run when a pull request has been made? How is it traditionally kicked off?

[00:09:27] Nat Catchpole: Originally I wanted to run it on pull request commits, but because it's a graph, there's a problem: the commit frequency to Core is very uneven.

[00:09:41] So you could have three commits in an hour, then nothing for two days over the weekend. That would actually make it harder to see changes over time. So there's a pipeline schedule on GitLab, and each dot [00:10:00] here is actually three tests.

[00:10:01] That's why you can see a couple of points here. So every six hours it runs all the tests three times; six hours later it does the same thing, and it just runs like that. And three commits a day is probably not that far off the average Core commit rate, so there won't be dozens and dozens of commits between the points.

[00:10:23] But even if there aren't any commits, we still need a baseline for the next one that comes in. This might change over time, and if you're doing it on your own website, you could do something completely different to this, but it's how it's set up at the moment. So as you can see, there's a few different tests.

[00:10:39] What we've done is we've got a completely cold cache, where everything (the router, I think even image derivatives, things like that) has to be rebuilt; then the front page after visiting a different page; and then the front page with all caches warm. That last one is hitting the Drupal page cache, and you can see it's about 10 milliseconds [00:11:00] for the page to come back. That's because it's hitting Drupal's internal page cache, which is very, very fast. Then we have similar tests for the node one page, the first bit of content. So if you look at the green line, this is largest contentful paint.

[00:11:17] The blue line is first contentful paint, the first visible paint from the browser. And then the yellow line is time to first byte. Where you don't see two lines, that's because they're exactly the same time: if the first contentful paint is also the largest contentful paint, they just merge into one item in this graph.

[00:11:39] But technically there are two things there. The interesting bit is the traces. So we're going to click through to this one here, the bottom one. This is a warm node page, so not the page cache, but a somewhat warmed-up state. It takes a little bit of time to load [00:12:00] because there's quite a lot going on.

[00:12:02] So this is a trace. This is what gets stored in Tempo; you can see Tempo up here. You can see it's got your three main, not Web Vitals, but the three main network events: the first byte, the first contentful paint, and the largest contentful paint. But within the past couple of weeks, we've also added database query logging to this.

[00:12:20] At the moment, it's every database query, so as you can see, there's quite a lot. Let's click on one and show what's in there.

[00:12:32] Because Core's tests use the database cache, the cache gets and sets are also database queries. There's still some work to do to separate those out, so that it works for Redis and things like that, which shouldn't be too much to add on. So this is an attempt to get the page cache: you can see it's looking for the internal page cache ID.

[00:12:57] It's being called from the [00:13:00] cache getMultiple() method, and it's hitting the page cache bin. And because that's a miss, it has to do a lot of other work, since it didn't hit the page cache. So this is a routing query, and you can see it inserting into the router cache.

[00:13:20] And then as you go down, you can see how many queries are cache queries. This is why you should use Memcache or Redis on a website, because really, 90 percent, maybe more, are cache queries on a Drupal site. Once you take those out of the database, the database just doesn't have that much to do.

[00:13:46] Here's one that's not a cache get: this is the key value system, the State API; that's what it's getting. And you can see it's the same thing: there's a query string and the first bit of the [00:14:00] backtrace. So there's more things we can add here.

[00:14:03] I'd like to add in the CSS and JavaScript HTTP requests. That's still pending, but it would be good to have, images as well, so you can see where images are requested and how long they take to load, a bit like the network panel in the browser's dev tools. And we'll gradually add more things over time.

[00:14:23] Mariano Crivello: So, yeah, one of the things I noticed there is we're using first contentful paint, largest contentful paint, and time to first byte as, I guess, the front end tests, we'll call them. Is that using the Lighthouse library to do that? How's that working?

[00:14:36] Nat Catchpole: So it's not using Lighthouse. I wish it was; that would have been easier.

[00:14:43] What happens is, the functional JavaScript tests run via Selenium, using the Behat Mink integration with PHPUnit. So PHPUnit sends stuff via a couple of libraries to Selenium, and that runs Chrome. [00:15:00] Because we've got control of the Chrome browser in the test, we can enable performance logging, and then these timings are parsed out of the performance log, which is like a big JSON mess.

[00:15:13] It is quite messy, I have to say. So it's taking all of the events that come in the JSON log, reading them in, and then we find the event we're looking for, take the timings from that, convert it to OpenTelemetry, and send it off. The big thing that you get here, that you don't get with the Devel module or browser tools or most performance monitoring, is the combination of the front end and back end in one report. So if we add images and aggregate requests, they'll show up as those are happening. For example, Drupal does some things after the end of a request.

[00:15:55] So at the very end here, some of these database queries will be happening [00:16:00] after the page is served. You'd be able to see in this chart that some images and CSS are being served before the back end has completely finished running, if it's doing end-of-request work like cache sets, cron runs, that kind of thing.

[00:16:16] Having that kind of overview is what we get from using PHPUnit. If we just had a script that was generating Lighthouse output, you could only do one or the other at a time.
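
To make the mechanism concrete, here is a heavily simplified sketch of pulling one paint event out of the Chrome performance log. This is not Core's actual implementation: the php-webdriver log accessor and the trace event names are assumptions about the ChromeDriver log format, and real code has to cope with much messier data:

```php
<?php

use Facebook\WebDriver\Remote\RemoteWebDriver;

/**
 * Returns the first contentful paint timestamp in milliseconds, if logged.
 */
function extract_first_contentful_paint(RemoteWebDriver $driver): ?float {
  // Each performance log entry wraps a JSON-encoded DevTools message.
  foreach ($driver->manage()->getLog('performance') as $entry) {
    $message = json_decode($entry['message'], TRUE)['message'] ?? [];
    // Trace events arrive as Tracing.dataCollected; paint events are
    // identified by name, with timestamps in microseconds.
    if (($message['method'] ?? NULL) === 'Tracing.dataCollected'
      && ($message['params']['name'] ?? NULL) === 'firstContentfulPaint') {
      return $message['params']['ts'] / 1000;
    }
  }
  return NULL;
}
```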

[00:16:30] Mariano Crivello: Right, and then you have to do some kind of magic to combine them, whereas here everything is in one nice consolidated report. This harkens back to all those years I used to sit and stare at New Relic for customers to get some of that data.

[00:16:45] I would either have backend data or I could go and get front end data, but there was nothing that consolidated them together. I think this is nice, because you've laid a good foundation for targeting all the things that [00:17:00] traditionally make a page request slow, or some type of object request slow, and monitoring that.

[00:17:08] You mentioned there's a big array of things in that JSON blob. I'm assuming there's probably a bunch of other things you could append to this report that might be meaningful for others.

[00:17:19] Nat Catchpole: Yeah. Ajax requests are a big one we haven't done yet. Another thing we haven't done yet: it doesn't handle redirects.

[00:17:27] So you log in: you've got your form submission and then the subsequent redirect. It will record everything that happens in those two things, all of the database queries from submitting the form to the next page loading. But what it doesn't show you is where one request finishes and the next request starts.

[00:17:46] So that's something still to do. And that's where, again, you get a lot of flexibility from working with the raw performance log, but you also get situations like that, which you have to account for, [00:18:00] when it's not like...

[00:18:01] Mariano Crivello: Well, I think there's a lot of stuff there. I was going to say, is that something that you feel could be accomplished in the future?

[00:18:08] Nat Catchpole: Yeah, it's just a matter of getting to it over time. It just means you've got two time to first byte events to deal with, so you'd have to go from one first byte to the next. Not very difficult, just something to add in.

[00:18:26] Mariano Crivello: Well, we've got to walk before we run, right?

[00:18:32] Awesome. Well, I think this gives us an obvious visual component, as far as how these tests are run. Is there somewhere right now on Drupal.org where I can see these tests running against Core, or is this only available to Core developers?

[00:18:51] Nat Catchpole: So this dashboard is 100 percent public. If you go to grafana.prod.cluster.tag1.io, you can click around and see all of [00:19:00] this. It only just moved to this infrastructure last week, so if you're trying this at home and you find an issue, let us know. But it's all there. And I think I'm not even logged in on this page.

[00:19:15] So this is what you get if you click through to it yourself.

[00:19:21] Mariano Crivello: Well, I will make sure to break it, uh, this afternoon.

[00:19:24] Nat Catchpole: That sounds good.

[00:19:27] Mariano Crivello: Awesome. So I know you were running a test in the background. Do we want to jump back to that, or is there anything else here you want to show us?

[00:19:39] Nat Catchpole: I think that's probably most of what there is on this.

[00:19:43] Let's see. Let's see if that test is finished.

[00:19:48] Mariano Crivello: Okay. And if it hasn't finished, maybe what we can do is just dive in and show us a little of how one sets up a test, and how we might mess with it a [00:20:00] little bit and modify it for our own purposes. Let's definitely get into that.

[00:20:05] Nat Catchpole: Okay. So now we're going to look at adding some test coverage to Drupal Core, to hopefully show up in a local Grafana instance. I didn't actually prepare anything in advance, so this is genuine live coding. Because of that, I'm going to take a shortcut which, let's be honest, I take quite often.

[00:20:27] I'm going to copy one test to a different file, change a couple of class names, and then use that as a basis. 'Cause why not? Let's call it authenticated.

[00:20:43] All the, all the secrets coming out now.

[00:20:45] Mariano Crivello: I think it's very common for people to copy forward, right?

[00:20:50] Nat Catchpole: Yeah, why type when you can do this? All right. So let's get rid of all the things we don't want.[00:21:00]

[00:21:00] Okay. So if you walk through, this is just what you need: the namespace declaration; the performance test base class, which I'm using; and group OpenTelemetry, which means that the Core performance test schedule will run this test, because we run anything in this group. So if something gets added to the OpenTelemetry group, it will run on that six-hourly job and show up in the graphs. That's all you have to do.
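
A sketch of the skeleton being walked through here. The base class and the group annotation are the pieces Nat describes; the class name follows the "authenticated" file he just created, and the rest is an assumption about what the copied Umami test looks like in Drupal 10.2:

```php
<?php

namespace Drupal\Tests\demo_umami\FunctionalJavascript;

use Drupal\FunctionalJavascriptTests\PerformanceTestBase;

/**
 * Measures performance of the Umami front page for authenticated users.
 *
 * @group OpenTelemetry
 */
class OpenTelemetryAuthenticatedPerformanceTest extends PerformanceTestBase {

  /**
   * Use the Umami demo profile so the site has realistic content.
   */
  protected $profile = 'demo_umami';

}
```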

[00:21:31] Mariano Crivello: Would this be a place where others could have their own group, for running their own schedule?

[00:21:36] Nat Catchpole: Yeah, you can add an arbitrary group name, like "my performance tests", and run those. It means you can run these tests with PHPUnit just by specifying the group.

[00:21:51] You don't have to do anything else, and you don't have to maintain a list anywhere; you just annotate the tests that you want to run. We're using Umami because it's got [00:22:00] some content, which makes it a lot easier. What we don't have in Core at the moment is any authenticated user tests.

[00:22:06] So: $user = $this->drupalCreateUser(), then $this->drupalLogin($user),

[00:22:23] and then we will warm the cache. One important thing when you're writing a performance test: you need to decide what level of cache warming you're going to have, and do that before you collect any performance data. If you don't warm the cache, then you're testing with a completely cold cache. That's fine, that might be what you want, but it's important to decide what you're doing, otherwise you'll get the wrong numbers.

[00:22:53] Here, we want to see how long it takes to serve a page to an authenticated user when they've already [00:23:00] visited that page. That'll tell us: is the dynamic page cache working well, is BigPipe working well, that kind of thing.
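
Inside the class sketched above, the warm-up part of the test method might look like this; the method name is made up, and the measurement step is added in the next snippet:

```php
  public function testFrontPageAuthenticated(): void {
    // Warm-up: create and log in a user, then visit the front page twice
    // so the dynamic page cache, render caches and asset aggregates are
    // populated. None of this sends performance data to OpenTelemetry.
    $user = $this->drupalCreateUser();
    $this->drupalLogin($user);
    $this->drupalGet('<front>');
    $this->drupalGet('<front>');
  }
```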


[00:23:08] Mariano Crivello: Yeah. So that first request will obviously be much slower, and then the subsequent requests should be

[00:23:13] much more performant. And those are the numbers that we would be looking at over multiple tests, for trends.

[00:23:22] Nat Catchpole: So this is where things actually happen; hopefully I've actually typed it correctly. What I've done here is: all of those bits, the user creation, the login, and those two requests to the front page. During those, the test won't save or send any performance data to OpenTelemetry.

[00:23:48] Those are pure warmup steps. And it's really important that it doesn't collect performance data during those steps, because it would just cause a mess in the [00:24:00] dashboard, and in what you can assert on, things like that.

[00:24:02] Mariano Crivello: You would see a big spike and then it would come back down.

[00:24:05] Nat Catchpole: Yeah, exactly. You would essentially not be able to do targeted testing; it would look like New Relic, where New Relic gives you every request on your site and then tries to show you what it thinks is important.

[00:24:19] So that's a subtractive kind of thing, and this is the opposite; it's additive. You're trying to hit a very specific set of circumstances, reproduce those circumstances over and over, and then you can monitor over time. That's the big difference between performance testing and performance monitoring: this kind of setup and control.

[00:24:44] So when we actually want to collect data, we call collectPerformanceData(). That just takes a closure that does what you would normally do, and the return [00:25:00] value is a value object that has things like the count of database queries. The test will fail, because I don't know how many database queries there are, but let's just pretend there were zero database queries and assert on that.
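
Completing the method sketched above, the measurement step might look like this. The PerformanceData value object and getQueryCount() follow the Drupal 10.2 API, and the deliberately wrong assertion mirrors what Nat does here:

```php
    // Only requests made inside this closure are traced, timed and counted.
    $performance_data = $this->collectPerformanceData(function () {
      $this->drupalGet('<front>');
    });

    // Deliberately wrong starting point: the failure message reports the
    // real query count, which can then be baked in as the expected value.
    $this->assertSame(0, $performance_data->getQueryCount());
```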

[00:25:23] Okay.

[00:25:30] So now, it's always best to check; that's a good start. All right, let's see. Now that the other tests have finished running, I'm going to try to run this test and we'll see what we get.

[00:25:46] All right. We'll let that one run, and then we'll come back and look at it in a minute.

[00:25:50] Mariano Crivello: So, yeah, I noticed that you're running kind of an isolated PHPUnit test in that project. Is that something that Composer's managing all the dependencies for?

[00:25:59] Nat Catchpole: Yeah. If you [00:26:00] have a git clone of Drupal Core, composer install will install the dev dependencies.

[00:26:06] If you're using the Core recommended project, you can add drupal/core-dev as a dependency to your project, and then you get PHPUnit from that. The only real difference is the directory structure. So what I've done here should apply to any DDEV-plus-Composer-managed Drupal site; the only difference is maybe one level of directory nesting or something like that.
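
On a composer-managed site, that dependency is added the usual way:

```sh
composer require --dev drupal/core-dev
```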

[00:26:34] Mariano Crivello: Okay. So it's fairly easy to turn this on and implement it in your own project, and then do what you just showed us: copy a test and start tinkering around with things.

[00:26:47] Nat Catchpole: Yeah. If you don't already have functional JavaScript tests, it's a little bit fiddly to get them running if you haven't done it before.[00:27:00]

[00:27:00] You might get lucky, but you might not. There are resources around for running Core's functional JavaScript tests with PHPUnit (I'll probably add a link in the description), and I'd recommend getting to that point first and then moving on to this. If you're doing it for the first time, expect hiccups.

[00:27:22] But once you've got to the point where you can run the functional JavaScript tests, then it's literally one line for the test you want, and it all works.

[00:27:29] Mariano Crivello: Right. Yeah. And I know we're working on some documentation that will coincide with this demo, if you will. And I think we'll probably put out a

[00:27:40] full-featured walkthrough of this at some point that correlates directly to the documentation. But I think this gives us a really good sense that, okay, this is actually working. It's actually running tests, it's actually in Core today, and we can see the data it's collecting. I heard something earlier this week that we're actually already starting to see some [00:28:00] performance gains in the Core project.

[00:28:01] Do you have any information about that?

[00:28:04] Nat Catchpole: So, I recently started working on the database query logging, and I was looking through which database queries were getting collected on a Drupal login. Literally: you go to the login page, enter username and password in a test, hit save, and then see what's collected between hitting save and the next page loading.

[00:28:29] And I noticed that Drupal Core checks three times whether there's a user with your username, with three database queries; two were actually the same and one was different, but it was almost the same thing running three times. So all we needed to do was, instead of running the same query three times, run the query once:

[00:28:55] load the user if it was there, and then pass that user to the next [00:29:00] check, which checks if it's blocked, and another that checks something else. And that's changed three queries down to one. That should have an impact for every Drupal login from 10.2 onwards, though that particular patch isn't actually committed yet.

[00:29:19] But there was another issue before that. When we check logins, we check Drupal's flood control system, just in case there have been too many failed attempts. You know, you're allowed a maximum of five failed password attempts in an hour; that's on by default in Drupal Core. And the flood table is created on demand.

[00:29:38] But it was only created on demand when you add a record to the table, which only happens if you put the wrong password in. So in Drupal Core's tests, every time we log someone in, which is like 99.9 percent of tests, we don't put the wrong password in. We actually log that user in; we're not [00:30:00] testing that the password failed.

[00:30:01] So that table never gets created. But when the table is not created, there's a whole error-handling path: the query runs, the table doesn't exist, so there's an exception, and we check if the table exists to see whether it's an error because the table doesn't exist, or a different error from the database. Then we throw the exception away without creating the table, because we weren't adding a record.

[00:30:25] All of that work was happening on every single Drupal login in every Drupal Core test, which is hundreds of times per test run. (Wow.) It turned out that was adding about 30 seconds to a Drupal Core test run: say it was eight and a half minutes, it went down to about eight minutes.

[00:30:43] So something like 10 percent of a Core test run was this: finding out the table wasn't there, handling the exception, running an information schema query to check if the table existed, and then swallowing the exception. All we needed to do was create the table on that first get: when [00:31:00] we request the flood record, if the table's not there, create it then, and after that it exists and it just lets you go.

[00:31:08] So that's pretty much it with Drupal Core: because it's quite a lot of work to do performance testing on it, it doesn't happen that often. But once you start looking, you find issues, and they're not that hard to fix. And then you look again, and it's like, oh, fix that too.

[00:31:25] And then, performance improvements can start to get baked in.

[00:31:29] Mariano Crivello: Awesome. So yeah, Gander's only been in the project officially for a couple of weeks now, and it's already starting to show promise for optimizing Drupal Core. That's awesome news; I'm happy to hear it. I've always been a big fan of tackling things as early as possible. I know in a number of major projects where performance was highly critical, staring at New Relic and trying to find the quote-unquote gremlins in the code [00:32:00] was a tough job. And a lot of times, I think we just put a lot of faith and trust in Drupal Core being as optimal as possible.

[00:32:07] And we were always looking for things in community projects and themes where there might be some performance problems. Maybe let's talk a little bit about that. If I had a Drupal distribution, or maybe a Drupal theme, how easy is it to add this to my project so that somebody could turn it on and start monitoring this type of telemetry?

[00:32:28] Nat Catchpole: So on the PHPUnit side, you could add this like tomorrow. You can add a functional JavaScript test that extends the performance test base class, and then you can hit a page in the test, get that performance data object back, and check how many database queries were run,

[00:32:47] how many CSS files were loaded, how many JavaScript files were loaded, cache gets, cache sets, cache deletes. And because they're [00:33:00] PHPUnit assertions, it's just a PHPUnit test at that point, and that works immediately, out of the box, with Drupal 10.2. If you want to do the Grafana side, you need to host Grafana somewhere.
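
As a sketch of what those assertions look like for a contrib module or theme, using the PerformanceData getters from Drupal 10.2; the path and the expected numbers here are invented for illustration:

```php
$performance_data = $this->collectPerformanceData(function () {
  // Hypothetical route provided by the module under test.
  $this->drupalGet('my-module/listing');
});

// Back end and front end counts in one place, no Grafana required.
$this->assertSame(4, $performance_data->getQueryCount());
$this->assertSame(2, $performance_data->getStylesheetCount());
$this->assertSame(1, $performance_data->getScriptCount());
$this->assertSame(31, $performance_data->getCacheGetCount());
$this->assertSame(0, $performance_data->getCacheSetCount());
$this->assertSame(0, $performance_data->getCacheDeleteCount());
```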

[00:33:17] You could spin it up yourself, but the company behind Grafana offers a free tier of hosted Grafana. We have not yet tested whether the Drupal Core Gander config will just send to that hosted Grafana, but I think it might; that's something we'll hopefully do soon, and document how to do. And because you're not running production levels of traffic, I think the amount of data they store on that free tier would probably serve most projects, with a few tests a day, for a very long time.

[00:33:54] Mariano Crivello: I think that would be a great way to incentivize people to start using this. Even still, [00:34:00] Grafana is fairly easy to set up, and if you run a fairly important project on Drupal.org, I think getting it up will be worth the while. All right, I know that we covered the details of how a test is run, and you actually created a test live here on the session.

[00:34:21] I appreciate that. Live coding is always, or at least can sometimes be, not a fun thing, but that actually went fairly well, so bravo. I know that test just finished. Let's take a look at what you've got here for those results.

[00:34:36] Nat Catchpole: Okay. So here you've got the first one, the existing Core test that ran.

[00:34:41] This is the warm cache front page. As you can see, it's pretty similar to what the Drupal Core dashboard, the public one, looks like. The front page with a warm cache comes in at about 10 milliseconds, and then the paints are about [00:35:00] 40 to 50 milliseconds. So this is what DDEV just installed.

[00:35:04] Localhost:3000 is the URL we go to. You get the full Grafana stack, the same dashboard, all pre-configured for you. And as you can see, time to first byte is about 10 milliseconds, and the paints are about 30 to 50 milliseconds. This is running on my laptop, and Zoom's obviously running on my laptop at the same time while we're doing this.

[00:35:26] It's in the rough ballpark. And I don't have DDEV's MySQL on a RAM disk; it just runs on an SSD. So there's going to be variation in what you get locally, but you can see it. And then you can click through.

[00:35:42] Mariano Crivello: Yeah, I guess it's a good point to bring up here: this is running on your own machine, so if you wanted to show any type of trending, you would have to rerun it with the same environment conditions, right?

[00:35:52] So, not saying that you should have Zoom running when you're running these types of tests, but you want to make sure that if you're going to be comparing tests over time, [00:36:00] the environment conditions are as static as possible. But there's always variability there.

[00:36:07] Nat Catchpole: Yeah, especially with the timings.

[00:36:09] But you can look at the traces. So this is the test I wrote on the call, which amazingly ran and seems to have sent some data. So yeah, there you go. This is showing you: you can see that they're logged in, because it's hitting the sessions table, and we don't do that for anonymous users.

[00:36:34] And then it's loading the user here, and here it's loading the roles, and then various other less interesting things. I wonder what that config is.

[00:36:46] So you see, yep. I don't know what's called in that, but it's loading a lot of config. Maybe this is full of blocks. Once you start looking, you start finding things, just [00:37:00] because you haven't looked before. And it's hitting the session twice. Interesting. There you go. So anyway,

[00:37:07] Mariano Crivello: [Crosstalk.]

[00:37:09] Nat Catchpole: Yeah, so that's what you get from a test like that.

[00:37:15] I don't know how long that took me to write, like a minute or something, and you already see different things here from what you see with the anonymous tests. And if you did a different amount of cache warming, you'd get a whole different set of database queries, cache sets, and things like that to compare as well.

[00:37:29] So in terms of adding test coverage, that's really what there is.

[00:37:32] Mariano Crivello: That's exciting.

[00:37:33] It's interesting: you just created a test here, and we're already starting to see things that could potentially have some work put against them,

[00:37:41] and we would see performance gains in Drupal Core. So it's very promising.

[00:37:50] Nat Catchpole: So here's the test run that sent that to OpenTelemetry, and you can see the first assertion: I was expecting [00:38:00] zero database queries, and it's actually got 15. So if I go in and change that to 15 and run the test again, it should pass.

[00:38:06] And then it will take me on to the next thing. Again, this is what you can do without any Grafana: this just runs standalone with PHPUnit. You can do it on any Drupal contrib module or theme or profile and get some coverage, and then add the Grafana monitoring later on.

[00:38:25] Mariano Crivello: Yeah, I could see this being a really big win for those Drupal profiles and distributions, where a lot of work goes into building those profiles and that ecosystem. Now, beyond just targeting things in Drupal Core, they can start to optimize their own projects. How do we get started on this?

[00:38:46] Is there a Slack channel that somebody who's interested in doing this should participate in? Where's the community page, if you will?

[00:38:57] Nat Catchpole: Yeah, so if you're in Drupal [00:39:00] Slack, I guess there are two channels where you could talk about it. One is the contribute channel: in terms of any discussion of performance testing of Drupal Core or contrib or profiles, that's a good place to ask questions.

[00:39:13] I'm in that channel every workday and keep a bit of an eye on it, so you can tag me in if you want to talk about this. There's also, if you're trying to implement it for your own site, the performance channel. That's a lot lower traffic, but it's also a good place to talk about performance testing.

[00:39:31] There are people in there who are interested in performance; it just doesn't get as much chat as the contribute channel.

[00:39:38] Mariano Crivello: I feel like it might get a little more noisy now that this is out in the world.

[00:39:42] Nat Catchpole: Let's hope so. Yeah. Yeah.

[00:39:45] Mariano Crivello: Great. So you've shown this in action,

[00:39:51] and you have created a test on the fly; thank you for doing that. I [00:40:00] guess: what's coming next? I know you recently added database query logging, and the project's fairly active. What's on the horizon? I know you mentioned Ajax calls; I don't know how close that is to being done, but give us a little bit of a roadmap here.

[00:40:14] Nat Catchpole: So, Ajax calls are currently in neither the assertions nor the Grafana side; it'd be good to add those. Also scripts and styles in these traces, and images: all of the front end events that aren't these major timeline events, but the actual things that loaded on the page. That's hopefully coming up pretty soon, but I haven't actually written the OpenTelemetry side of it yet.

[00:40:47] Um, but because

[00:40:48] Mariano Crivello: Is this like the total number of CSS or JavaScript file counts?

[00:40:54] Nat Catchpole: We would probably add the counts as well, but it would actually show you [00:41:00] the individual files, when they were loaded and how long they took. (Gotcha.) The advantage of that is, say you have a request that loads more JavaScript, it would show you that process as well.

[00:41:18] Mariano Crivello: I was just going to say, I can see the benefit of seeing how many requests are being made for CSS. And if it's a very high count, then maybe we know that aggregation isn't working, or some other performance tuning hasn't been set up or optimized for that particular test scenario.

[00:41:43] So, yeah, I'm excited to see stuff like that, because those are the big things for front end performance that everybody's looking at: how do we reduce the total number of requests on a single page load, and what is the total bytes of that particular [00:42:00] request being served?

[00:42:01] I think that would be an interesting set of trends as well.

[00:42:05] Nat Catchpole: Yeah, and you also get that with aggregation. Because so much of the CSS and JavaScript is generated dynamically, depending on what's on a page, you can have your aggregate for the front page, but then you go to the node page and it has to create a whole different set of aggregates. There are optimizations you can do so that some files are shared, where some aggregates would be identical between those pages and only the different things would differ.

[00:42:32] That's quite hard to do. We'd like to do that in Drupal Core, but we haven't successfully done it yet; you just have to get lucky. You can do things on a site-specific basis to manipulate what ends up in which aggregate, so that you have less duplication within those files. And this will show you whether aggregates are being rebuilt or not, because a [00:43:00] cache hit on an already-built aggregate is going to be a lot faster to serve than one that has to be built on the backend.

[00:43:08] And you will see things like that in these tests. Another big one, just in terms of legibility of the reports, is to separate the database queries for cache operations, key value, cache tags, into different types of spans. And I think this, like, "Umami front page"

[00:43:32] doesn't need to be here; it can just be there. So we'd have cache get, cache set, cache delete, database query. That kind of tidying up would also mean that you can scroll down here and see, oh, there's a database query, as opposed to just another cache set. You can optimize your cache gets and sets, but there are not very good returns in trying to do that.

[00:43:58] So it's better to look [00:44:00] for the things that aren't already cached, and it's currently not that easy to do that; you kind of have to look for something that's not a cache ID in the list. So: a bit more labeling, a bit more separation. And by doing that, we'll also support Memcache and Redis backends for caching. Currently you wouldn't see those on here, but you will once that's done, and that will help sites that want to adopt this; they'll be able to see everything that they expect to see.

[00:44:32] Mariano Crivello: Yeah, good point. Well, awesome work. I think this is more than just a great foundation; as we mentioned before, it's already showing the fruits of the labor that's been put into it.

[00:44:46] I'm excited for Drupal and the Drupal Community that we now have an official performance testing suite in Core. This is only going to [00:45:00] do good things for our future and the projects that we build in Drupal.

[00:45:01] Thank you, Nat, for spending the time to walk us through this. I look forward to doing a follow-up session in the future where we add some of these additional features.

[00:45:10] And we will definitely be joining you in the performance channel to ask questions. I appreciate you offering up your time and help there.

[00:45:19] If you didn't catch the first session, check it out to learn a little more about the history of Gander and how we decided to participate and build this project out. All of the links will be posted for this session.

[00:45:30] And if you like this talk, please remember to upvote, subscribe, share it, and do all the things that you do on YouTube. We have a whole host of other Tag1 Team Talks, and as always, we'd love your feedback and any topic suggestions. You can write us at Tag1 Team Talks, or at ttt@tag1.com.

[00:45:51] And a big thank you to you, Nat, again, for demoing this for us, and to anyone who tuned in and joined us.