This is an edited transcript. For the blog post and video, see Shifting from FID to INP: Google’s New Metric for Improving Web Performance.


Mariano: [00:00:00] Hello, and welcome to Tag1TeamTalks, the podcast and vlog of Tag1 Consulting. On today's show, we're going to talk about INP, a new metric for interactivity and responsiveness from the Google Chrome team and part of the Core Web Vitals. I'm Mariano Crivello, and I'm based out of Koloa, Hawaii. Tag1 is the number two all-time contributor to Drupal.

Mariano: We build large scale applications with Drupal as well as many other technologies for global 500s and organizations in every sector, including Google, the New York Times, the European Union, the University of Michigan, and the Linux Foundation, to name a few. Today, I'm joined by Adam Silverstein, Developer Relations Engineer at Google, and Janez Urevc, Strategic Growth and Innovation Manager here at Tag1.

Mariano: Welcome, gentlemen. So today, Adam, I know you have a presentation for us. Real quickly, before you dive in: what is INP? And then let's go ahead and jump right into the slide deck that you have for us.

Adam: Sure. Yeah, I'll bring up my slide deck. INP is a new metric that Google is introducing this year as [00:01:00] part of the Core Web Vitals.

Adam: So the Core Web Vitals are a group of metrics that Google introduced a few years ago to measure how users experience the web. It's a little different than the traditional way we think about performance, which in general was a raw metric for how quickly things loaded.

Adam: Core Web Vitals looks at what experience users are actually having. How quickly do things load? But also, how stable is the web page once it loads? And how responsive is it to user input? When you try to do something on a website, does it react quickly? That's what this new metric is about.

Adam: And we're going to get into great detail about exactly what it measures in about 10 minutes here. My talk is going to be pretty short, but hopefully I'll address all the questions about what it is and, more importantly, why we're introducing it. So what is Interaction to Next Paint? It's part of the Core Web Vitals, like I said, and it's aiming to measure how responsive the page is to user input. You can imagine a user coming to your page and opening a [00:02:00] calendar pop-up. Or maybe an accordion on the page. Or maybe there's a slider and they're trying to go to the next slide.

Adam: And I think we've all had this experience where you click on the button and nothing happens. Then you click again, maybe a third time. And then the thing pops open and pops closed because you've actually initiated the event twice. And so this is what INP is all about. It's about identifying these poor experiences that users are having on your website by tracking the entire life cycle of the page and then reporting on what the worst interactions are.

Adam: What is a good INP score? What does having good INP mean? Because we talk a lot about having good, or passing, or not passing. Good is considered 200 milliseconds or less. So if you get some sort of visual response to user input within 200 milliseconds, that's good. If it's over 500 milliseconds, that's considered poor.

Adam: And everything in between is considered needs improvement. When we're talking about INP metrics and interactivity in general, the focus is on mobile, because on desktop people tend to have higher-powered devices that respond very well under all conditions, and pretty much everyone passes when we look at the desktop data set. On the mobile side, there's a great deal of variance in the power of the devices, how much memory they have, the network conditions. So you're way more likely to have these interaction problems, and that's what we see in the data: more INP problems on mobile. So this is really a mobile issue.
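The thresholds Adam describes can be sketched in a few lines of JavaScript. This helper is purely illustrative; the name `rateINP` is our own and not part of any Google tooling:

```javascript
// Bucket an INP value (in milliseconds) into the Core Web Vitals ratings
// described above: good <= 200 ms, poor > 500 ms, needs improvement in between.
function rateINP(inpMs) {
  if (inpMs <= 200) return 'good';
  if (inpMs > 500) return 'poor';
  return 'needs-improvement';
}
```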

Adam: That's where you need to focus. And then, finally, we talk about the 75th percentile, which means we want 75 percent of users visiting the site to have this good experience. Users are going to have a range of experiences depending on the conditions and their devices. So we aim for the 75th percentile, meaning most users are going to get this good experience.

Adam: Some will have worse. Some will have way better. Oh, there it goes. This little video, I'll play it one more time, shows you what that bad experience is: you're trying to expand something, and clicking, and it opens and then it closes. We all know it's frustrating, but it's more than frustrating.

Adam: It directly leads to poor business metrics, right? [00:04:00] Usually websites have some business goal: you're trying to get people to sign up for your newsletter, or buy your product, or become a member. And all those things require happy users on smooth paths. As soon as they start to have a frustrating experience on your site, they're going to drop off.

Adam: And a key insight for improving responsiveness is that users spend over 90 percent of their time on a page after it has loaded. So users spend most of their time on web pages after that initial load phase that we've been so focused on in the performance world. It's great that the page loads fast and becomes stable and interactive quickly, but then there's all this time that users spend on your website.

Adam: And you want them to have a good experience there too. This metric is actually much better at capturing poor experiences along that whole life cycle of the page. Part of that is because the previous metric, FID, really only measured the initial interaction. And what we see, at least in the WordPress world (sorry, my slides are WordPress-oriented because that's what I prepared this for, but this is also going to be true in the Drupal world), is that on desktop, everyone already passes this metric, and on mobile too.

Adam: Everyone's already passing the FID metric. Yay for us, we have great metrics! But we know this isn't the case. We know there are still poor experiences out there on the web that people are having. And the reason this metric is passing is that, again, it's only measuring that first interaction. We're going to dig in a little more about exactly what it measures, but just understand this: First Input Delay passes for everyone.

Adam: It really isn't a useful metric. It's not capturing the poor experiences people are having, and it doesn't give us any information about how we can improve our sites. So that brings us to INP, the new metric that's going to replace FID in the Core Web Vitals in March. And on the WordPress side, we see that only about 70 percent of sites on mobile are passing this metric.

Adam: So there's definitely room to improve. The big difference here, again, is that it's going to measure [00:06:00] all the interactions on the page. That's the main change. But there's also a subtle difference in terms of exactly what we're measuring between the two metrics that's important to understand.

Adam: So I'm going to go through that. When we look at an interaction on the web, the user does some sort of action: they click or they touch. And then the browser has to respond. So there's some delay that happens before even the input handlers can fire. That's that initial input delay.

Adam: Then there's the processing time of all the different JavaScript event handlers, and there could be multiple for one event, right? Maybe you have a click handler that opens the pop-up, but there's also another one firing off an analytics call, and another one looking up some information.

Adam: So there could be multiple events tied to one user interaction. And then you've got the part where the browser itself has to render the results, and that can take a long time if you have an overly complex DOM or CSS; you can actually get delays at that part. Then finally, the user is presented with something, whatever it is: the calendar pops [00:07:00] up, or maybe, if it's a long interaction, there's a spinner going. But the user gets some feedback that their action has been received by the browser.

Adam: That's that final frame presented, and that's the entirety of an interaction. When we look at First Input Delay, it really only measured this tiny little part at the beginning: the time between when the user clicked or interacted and when the event handlers started firing. That was the time we were measuring.
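The interaction latency Adam is breaking down here is the sum of three phases. A toy illustration; the function name and the sample numbers are invented for the example, not real measurements:

```javascript
// An INP-style latency is input delay + event processing + presentation delay.
function interactionLatency({ inputDelay, processingTime, presentationDelay }) {
  return inputDelay + processingTime + presentationDelay;
}

// Illustrative phase values for one interaction, in milliseconds:
const example = { inputDelay: 40, processingTime: 120, presentationDelay: 60 };
// 40 + 120 + 60 = 220 ms, just over the 200 ms "good" threshold.
```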

Adam: And that time can get extended if you have a lot of JavaScript operating on the page, or if the device doesn't have enough power to process everything all at once. So there can be a delay there. But more importantly, INP measures the entire cycle of the interaction.

Adam: So it's measuring everything from when you click all the way to some visual presentation to the user. Again, it doesn't mean the whole process is completed. It could be a long process that's sending data off to the server, but there's some indication, a spinner opening. Or let's say you're filtering search results, right?

Adam: You type in some letters [00:08:00] and it doesn't just go blank; there's some indication that a loading process is happening. That's fine. That means the interaction is completed. All you need to do is give users visual feedback. What's bad is when you click and there's no feedback, nothing happening.

Adam: And so then you're clicking again, trying to get the thing to happen. So that's what INP measures: this entire cycle. And it measures it for all of the interactions on a page, over the entire life cycle of the page. Even a minute into the page, if the user is still browsing around, that interaction will be recorded.

Adam: And then INP reports the worst interaction. So it's going to find those poor interactions on your website that we know are out there. I want to take a little side detour here and show some stuff from the CWV Tech Report from my previous slide.

Adam: Yeah. So just to summarize, we've got these three parts of the interaction. The first part is the input delay, which is what FID measured. Then you've got the processing time; I mentioned the input delay could be slowed by a lot of stuff happening in the browser, and it's the same thing with the [00:09:00] processing part, where you could have a lot of different events kicked off by that user input.

Adam: And then all of those have to complete before the presentation happens at the end. And a key insight here is that a lot of JavaScript is poorly written and doesn't yield to the main thread. So this is where we can get into how to improve this: if your event handlers are written well, so that they yield back to the main thread, then the browser can much more quickly get to that presentation.
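One common shape of that "yield back to the main thread" fix is to split a long handler across tasks. A minimal sketch: the `scheduler.yield()` feature detection and zero-delay-timeout fallback are a widely used pattern, while the handler and the callbacks passed into it are hypothetical placeholders:

```javascript
// Yield control back to the main thread so the browser can present a frame.
// Uses scheduler.yield() where the browser supports it, otherwise falls back
// to a zero-delay timeout, which queues the rest of the work as a new task.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Hypothetical handler: do the user-visible work first, then yield,
// then run non-urgent work (analytics, bookkeeping) in a later task.
async function onAccordionClick(expandAccordion, sendAnalytics) {
  expandAccordion();   // urgent: the user needs visual feedback now
  await yieldToMain(); // let the browser paint that frame
  sendAnalytics();     // deferred work no longer blocks the presentation
}
```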

Adam: The presentation delay part primarily happens when you have an overly complex DOM. So if you have an incredibly complex DOM with thousands and thousands of elements, or the same with your CSS, the browser has to recalculate everything whenever it makes a change to the page. This is where that part can get slowed down.

Adam: I'm going to just switch tabs briefly here. Tell me if this works. Are you guys seeing the Core Web Vitals screen now? Yep. Okay, cool. So this was in my previous slide. What I'm showing here is cwvtech.report. This [00:10:00] is a public resource; you can go check it out yourself.

Adam: And in this dropdown here, I've selected Drupal as my technology. So what we're seeing is the last year of how Drupal websites as a whole are passing Core Web Vitals, and it's pretty darn good: at the end of last year, Drupal as a whole had a pass rate of almost 50%, which is really good.

Adam: What I wanted to point out, actually, is that we have two options here. Oh, I had the 2024 toggle on. In this little dropdown, you can toggle between how the Core Web Vitals metrics look now, the current version, and how they're going to look when we get to the 2024 metrics, where FID is replaced by INP.

Adam: And so it is a little bit lower. With the 2023 metrics it's 54%, and we said it was about 50 percent for 2024. However, if we go to the settings page, we can limit this report to just the top head of the web: the enterprise-type sites that Tag1 is likely building. [00:11:00] So we're going into this and we're choosing top 10,000.

Adam: So this is a rough metric: these are the top sites by traffic, or by navigations, as we call them on our side, but basically page views, right? These are the top sites on the web. And once we apply that filter, we go back to the technology comparison screen. And just to show you, when this loads:

Adam: We're talking right now, in Drupal, about a hundred origins that are in that top-10,000 data set. Now, not every Drupal site is recognized by the underlying Wappalyzer technology, so there are probably more. A headless site, for example, might not show up as a Drupal site because it doesn't have any of the traces of what a normal Drupal site looks like. But there's still a significant number that we see in the top 10,000.

Adam: But in any case, what we see at the head is a 58 percent pass rate, better even than regular Drupal sites, which makes sense, because at the enterprise level you're focused on performance. But if we go to the 2024 metrics, you're going to see a sad thing, which is that the pass rate drops below 30%.[00:12:00]

Adam: And so it's actually more of an impact for enterprise-level sites. Why is that? We don't actually have firm data on this, but my intuition tells me that enterprise sites, first of all, tend to have a lot more interactivity in the first place: things that you might interact with on the page.

Adam: Secondly, they tend to have a lot more advertising, analytics, third-party pixels that are tracking interactions, all kinds of interactive scripts on the page. When I say third party, I mean from other sites, other services that you've embedded into your site.

Adam: Those tend to be problematic. They tend to be heavy in their JavaScript; they're focused on their own needs and not necessarily the needs of the user. And if you have enough of them loaded up, you can start to have these poor interactions. So that's why my intuition tells me we see more of it at the head.

Adam: And I do want to say something about this poor score. At first it sounds really bad: okay, Google's changing the scoring, they're changing the rules, and now suddenly our scores are bad. So I just want to point out that [00:13:00] nothing is actually changing; your website still works the way it did before. In fact, what these metrics are doing is showing you problems that you already have.

Adam: They're shining a light on poor user interactions that are already happening on your website, reflecting that in the tooling, and giving you a score that points you in the direction of: okay, I need to improve something here. I'm going to go back to my slide deck and talk a little more about how you do that.

Adam: Let me just see if I can go back to full screen here.

Mariano: Yeah, I was definitely going to ask about that score drop. Everybody's always concerned about their scores and their organic ranking. What is Google's take on this? If we're going to see a change in that metric this coming year, is there a little bit of leeway here?

Mariano: How is organic search going to be affected by this? Can you even talk about it?

Adam: So I can't really talk about it, because I'm not actually from Google Search. I'm from Google Chrome, and we're different pillars of a giant company. And even people from Search probably wouldn't be able to tell [00:14:00] you exactly.

Adam: I will say that Core Web Vitals are a ranking signal; that's been acknowledged by the Search team. However, I always think of it as something like HTTPS or mobile-friendliness: one of those ranking signals that, all other things being equal, might elevate your site to a higher position.

Adam: If you have a poor-performing site, Google might penalize you for that. However, the main thing is having good content. From an SEO perspective, at least as I understand it, the important thing is having content that matches what people are typing into the search box, and these other ranking factors are secondary.

Adam: And it's a little bit of a double-edged sword for us on the Chrome performance team. We want people to be motivated to improve their performance, so we're happy that people think SEO is all about having good performance. It's not, really. It's actually more about what happens in the funnel once people reach your site, right?

Adam: So you've got this given amount of traffic, and then we always use this funnel analogy: okay, a smaller number of people go on to read the second page, and then an even smaller number actually sign up for your [00:15:00] service, whatever it is. And it's at that final point that the Core Web Vitals are actually significant, right?

Adam: If people have a poor experience, if the page takes six seconds to load, we know they abandon it. And there have been all these studies showing that even just a 100-millisecond improvement improves your business metrics by X percent, right? So we know that as users have a better experience, they're more likely to complete the goals you've set out for your visitors, whatever those might be.

Adam: And so the focus really is on what happens after people visit your site, not on trying to get more people to visit your site with better rankings. Hopefully I answered that question at least slightly, but the answer is I really can't answer it, and that's not really what we should focus on. However, if it motivates you, go for it.

Adam: Yeah.

Mariano: Do you have a question, Janez?

Janez: Do you see the same pattern, with top websites performing worse on INP, generally across the entire Internet? Or is this unique to Drupal?

Adam: Yes, we definitely see more [00:16:00] INP problems at the head. Which is a bit of a challenge, in the sense that what my team has been doing is trying to improve performance at scale.

Adam: So improving the core platform of Drupal, or the core platform of WordPress. That's not really the problem we're talking about here. This is not a core platform problem. This is a problem of too many things happening on these overly complex sites, and of developers not really having the right tools to identify what those problems are, or even that there are problems. Because when you try to test interactions on your own development computer, it's very difficult to find these problematic interactions; you pretty much have to luck into finding them. That's where we get to collecting real user metrics.

Adam: And that's what I'm going to talk about on my next slide. So how do you measure INP? I've shown you already the CWV Tech Report, where we see INP data in aggregate. That same data comes from CrUX, the Chrome User Experience Report data set, and it funnels into PageSpeed Insights reports. When you run those on your site, the "how users are experiencing your site" section comes [00:17:00] from that data set. You'll also see it in Search Console; if you are signed up for Search Console, you'll get those reports. Services like WebPageTest also leverage this data.

Adam: You can also collect your own real user metrics. These are, again, metrics you're collecting from users as they visit your site, and this is where you're going to capture those poor interactions. There are open source tools like Faro, and then there are third-party services. I've got a few listed up here: SpeedCurve, RUMvision, Datadog.

Adam: Those are all RUM providers that will be supporting, or are already supporting, INP in their stacks. And then finally, you can roll your own solution, where you use the web-vitals JavaScript library to collect these metrics and send them off to some data store, like your analytics or some other place you want, where you can then write queries, analyze them for yourself, and really dig into the details. These reports, especially from the RUM providers, will highlight specific problems that you're having. So you'll see that a specific interaction, this element, when it's interacted with, causes poor [00:18:00] INP.
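Rolling your own solution, as described here, typically pairs the open-source web-vitals library with a beacon to your own endpoint. A sketch: `toBeaconPayload` and the `/analytics` endpoint are our own placeholders, while `onINP` is the library's INP callback:

```javascript
// Serialize the fields of a web-vitals metric object that are most useful
// for later querying: the metric name, its value in ms, and its rating.
function toBeaconPayload(metric) {
  return JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
  });
}

// In the browser, wiring it up would look roughly like:
//   import { onINP } from 'web-vitals';
//   onINP((metric) => navigator.sendBeacon('/analytics', toBeaconPayload(metric)));
```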

Adam: Users are having trouble opening this calendar on your page. Once you have that information, once you know where the poor interactions are, then you can go back into the lab using DevTools, the Web Vitals extension, and throttling. You turn on throttling so that your computer is emulating a slower CPU and a slower network connection, and then you try to reproduce it.

Adam: You go through that process, you do the interaction, and you can actually capture it: okay, there's some slowness here; something's causing the browser to be non-responsive at this point. What is that? So that's where you get to the next step of how you debug it.

Adam: You've identified the problem, you've reproduced it in DevTools, and now you're going to test hypotheses for how to fix it. I mentioned already that JavaScript is often the problem. So a simple trial you can do is use Chrome DevTools request blocking to block specific scripts. Of course, it'll break your page, but hopefully you can still try the interaction and see if [00:19:00] it helped.

Adam: And maybe you'll identify one particular script that's problematic. The second thing you can do is use overrides in DevTools: you can strip out your CSS, or you can make JavaScript yield correctly or return earlier. So you can fiddle with things in DevTools and try to solve the problem.

Adam: Hopefully you identify something that's definitely the problem, figure out a way to improve it, and deploy those changes to your site. There are a lot of links on these slides, so I'll share those at the end; hopefully we can get them to everyone. There are great articles on how to use DevTools to debug INP. It's a somewhat new area, right? It's a new metric, but all the tooling is there. We look for long tasks. There are ways you can emulate, in the lab, on your computer, what's happening with the real user metrics, solve the problem, deploy the changes, and then you have to wait.

Adam: You have to wait until you get new user metrics to see how users are experiencing your site, because INP only identifies the worst problem on the page. As soon as you fix that [00:20:00] bad problem, the next-worst problem becomes your worst problem. So you may not have actually solved everything; you may need to iterate again and again.

Adam: And of course, this is always true of performance: it's something you need to maintain. So that's it for INP; that's my overview of what it is. This is the last slide from the talk I'm giving later, but that's pretty much it, and I can answer any questions you guys might have.

Adam: Yeah.

Mariano: Thank you. This is extremely insightful. As somebody who has spent a lot of time working on Core Web Vitals over the years, I can tell you that having a new metric that gives us a much better picture of where problems might exist, for a better user experience, is a welcome change.

Mariano: Yeah, I definitely have some questions, because this is something that's based on interactivity. Immediately things come to mind like Lighthouse scores. Is Lighthouse going to be able to check for this? Because it's going to have to do something that's interactive. How are we going to [00:21:00] automate this type of interactive event?

Adam: Yeah, Lighthouse does. So basic Lighthouse will not be able to, but there is a Lighthouse mode where it tracks interactions over a whole life cycle; I'm forgetting the name of it now. A couple of the articles I've linked talk about using Puppeteer to script it up. Especially once you've identified an INP problem, you can write a scripted Puppeteer or Playwright interaction, where basically you're reproducing that browser interaction, and then you can prevent regressions.

Adam: I think that's the big thing, right? But also, if you know you have key interactions that you want to be fast, you can write Playwright tests around them with performance metrics built in, and then make sure those stay fast. So there are some approaches to that.
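The regression-guard idea Adam mentions boils down to timing an interaction against a budget. A generic sketch of such a helper; the 100 ms budget and the selectors shown in the comment are hypothetical choices a real test would make for itself:

```javascript
// Run an async action and report whether it finished within a millisecond budget.
async function withinBudget(action, budgetMs) {
  const start = Date.now();
  await action();
  return Date.now() - start <= budgetMs;
}

// In a Playwright test, this might wrap a click-and-wait, roughly:
//   const ok = await withinBudget(async () => {
//     await page.click('#open-calendar');            // selector is hypothetical
//     await page.waitForSelector('.calendar-popup'); // so is this one
//   }, 100);
//   expect(ok).toBe(true); // fail the build if the interaction regresses
```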

Mariano: Yeah. I reached out to Nat Catchpole, who's working on our Gander project for Drupal, which is the automated performance testing tool that we're implementing in Drupal core. I asked him, is INP on the horizon? He definitely said yes. In the next couple of months, we should have a working version of this [00:22:00] as part of our Gander tool.

Mariano: So that's exciting. I think it's going to be interesting to see where there could be some potential improvements in core. Maybe talk a little bit about what's going on in WordPress since this replacement. What's happening there, and what gains, if any, have you all seen?

Adam: Yeah, so my team is focused on content management systems, on improving the performance of the web at scale. We're trying to bring Core Web Vitals up for the whole web, and WordPress is a big focus because it is such a large part of the web. We helped form the performance team over there, in WordPress core.

Adam: And we've been working on what I would call primarily a lot of low-hanging fruit: reducing the number of database queries that happen on a page load, improving caching in all kinds of scenarios, picking away at time to first byte, really trying to make pages load faster.

Adam: But we've also been looking at other improvements to Core Web Vitals, like lazy-loading images, and we recently added fetchpriority. At a platform level, for INP, it's super challenging to figure out how you would help developers. The one thing we're looking at right now is the Speculation Rules API, which is a way of preloading or prefetching subsequent page loads using a JSON-defined syntax. It's a little different from the old preload/prefetch, where we would just add tags indicating URLs; with the new Speculation Rules API, you can actually specify patterns.

Adam: You can specify rules that determine how the preloading or prerendering happens. Those things do greatly improve INP, right? If you get everything loaded before you even hit the page, then a lot of times things go more smoothly. But there's a trade-off there; there's a cost to that.
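The Speculation Rules API Adam describes is declared as a JSON block inside a `<script type="speculationrules">` tag. A minimal sketch; the URL pattern is a placeholder, and the eagerness setting is a site-specific tuning choice:

```html
<!-- Ask the browser to prerender likely next pages matching a URL pattern.
     "/articles/*" is a placeholder; "moderate" delays speculation until the
     user shows intent, such as hovering a link. -->
<script type="speculationrules">
{
  "prerender": [
    { "where": { "href_matches": "/articles/*" }, "eagerness": "moderate" }
  ]
}
</script>
```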

Adam: I think for this year, our focus is going to be more on the enterprise side of things, trying to make enterprises aware of this INP metric. Because not only are enterprise sites going to be more penalized, but the ecosystem as a whole is going to suffer on scores from this [00:24:00] change.

Adam: Yeah, so our focus is improving things in core, but also working with the ecosystem. There are some very large players in the ecosystem; Elementor is a great example. They're a page builder plugin for WordPress, but at this point they're on something like 8 percent of origins on the web. They're that popular.

Adam: And when you look at a tool like that: it's amazing, it's been built over the last decade or so, and it has a lot of room to improve. I don't know if that applies to INP, but for sure to LCP. Our main focus has been on LCP, which is the loading metric: how quickly does the page load?

Adam: If it's a big image, trying to make sure that image is prioritized and other things are deprioritized, all that kind of stuff. But our focus has primarily been on the platform, and now we're shifting to working more at the enterprise level.

Adam: And of course we've worked with Drupal as well.

Mariano: Yeah, I think we've got our work cut out for us. Yeah, go ahead. I was just going to say, I think for all of our enterprise customers, the first thing we're going to do is probably dig right in, look at their INP scores, and see what needs to be improved this [00:25:00] year.

Janez: Yeah, I just wanted to say that in Drupal we have the same problems, and pretty much a similar approach to what Adam described with regard to WordPress. We've been trying to fix similar things in Drupal core, like reducing database queries, especially during development of Gander, the performance testing framework for Drupal core that you mentioned.

Janez: Because of that, we started finding these smaller problems that were chipping away at time to first byte. And the same as in WordPress, when it comes to INP, there are not many obvious problems that we could fix in core. Usually these problems don't originate from core, from things that core delivers, but are added afterwards.

Janez: And that's why we are also working with enterprise [00:26:00] users of Drupal. One of the things we are promoting is to use Gander on their sites, and to test before they even deploy changes. We are working hard to get INP into Gander, which would mean that when you identify a problem and fix it, you could have a test ensuring the same problem doesn't reappear, which would be a great thing.

Janez: So Gander is not a tool that lets you identify the problems you have at the moment, because you don't have real user data in it, but it's a tool that lets you ensure that fixes remain in the code base, which is also very important.

Adam: That's great. And just one more point, where you say you'll talk to your clients. That's actually one of the big reasons I'm out here talking about INP. It's good for you to talk to your clients before they get a report from Search Console that says, hey, you have all these problems, or maybe they're already getting those reports. This is a problem we want to get ahead [00:27:00] of, and really let clients know that you're aware of this new metric and you're actively working on it.

Adam: You're going to need to collect field data to see how it's really working, and to have a process in place for improving it. And then also emphasize that this new metric is actually going to help us make your website better. I think that's the key thing to communicate to business leaders: this is actually going to help you get better results from your website, even though initially it looks like you're being given a minus, or some sort of poor grade.

Adam: The point of it is that it's actually helping you get a better grade. And I have to say that over and over again, because people get sad when they hear their scores are going down. That's not what they want to hear. They want to hear that it's going up, or at least staying the same. Yeah.

Mariano: And I think, as a developer, I see this as just a much broader lens, or a much more detailed lens, into where those types of problems exist, where traditionally we didn't have that visibility without writing very sophisticated test suites ourselves. Sorry, Janez, I think I stepped on you there a little bit.

Janez: Yeah, I [00:28:00] just wanted to say that we should probably try to present this to clients not as degrading their grades, but as showing what their grades really are, which they weren't aware of before. A lot of people probably won't be happy with that explanation, but it's essentially what it is.

Mariano: Yeah, I think we'll probably be borrowing from some of your slides in that explanation, Adam.

Adam: Yeah, feel free, and I'm happy to do another presentation if you want to do some sort of client forum thing.

Adam: I did apply to speak at DrupalCon on this very same topic, so we'll see if that happens, but I am just trying to spread the word. That's my big role as a developer relations engineer: I can't fix the problem, but at least I can tell everyone what it is and how they can go about investigating and doing the work themselves.

Janez: And it's not just INP; in my opinion, it's also the general performance-oriented culture. It's really important that we [00:29:00] promote that. Because with that, even if metrics change, if you have a performance-oriented culture, if you think about performance on a day-to-day basis, then even when another new score is introduced, you will probably perform fine, because you did your work with regard to performance.

Adam: Good point.

Adam: Awesome.

Mariano: Thank you for joining us, Adam, and thank you for spreading the word here. I definitely learned a couple of things during this session, and I've got a little more research to do, digging into all of the links you mentioned today. We will definitely put those in the show notes.

Mariano: We'll link that out to everybody, and we'll also link to your social profiles if you want us to. And if you liked this talk, please remember to upvote, subscribe, and share it with your friends and colleagues. Check out all of the past talks that Tag1 has at tag1.com/TTT.

Mariano: That's three T's, for Tag1 Team Talks. As always, we love your feedback and any topic [00:30:00] suggestions; write to us at ttt@tag1.com. And a big thank-you again, Adam, for joining us and letting us know about INP.

Adam: Yeah, glad to do it. Thanks for having me.