
How Markdown Can Stop Hurricanes

The web industry accounts for 2-4% of global carbon emissions. This is roughly equivalent to the airline industry, and probably dwarfs your lifetime pledge to paper straws. As we build content-rich sites with thousands of users / products / posts, it’s our responsibility to deliver value on minimal resources. But how can we take action as sites grow ever-more complex? Let’s see how modern content formats like Markdoc, and server-first frameworks like Astro, can dramatically cut power consumption while bringing the cutting-edge DX you need.

Ben is an open source maintainer, teacher, and whiteboardist making the web a little better. You’ll find him hacking on Astro or sharing bite-sized learnings on wtw.dev.

Transcript

Ben Holmes 0:14
Yes, hello, I hope everyone’s doing well and enjoying CodeWord Conf. I’ve been trying to tune in, and I’ve seen a lot of amazing talks out there. So mine is maybe not a topic that you would expect: we’re going to be talking about how Markdown can stop hurricanes. You can find me at @bholmesdev, obviously. But we’re gonna get into a lot of details in this talk. So in case you haven’t noticed (I’m sure you have at this point, with two hurricanes hitting the coast of the US in the last month, I think, including my hometown), the Earth’s getting a bit toasty. It’s been that way for a while. And if you read across articles or industry news, the web contributes maybe 2% of global emissions, not counting all the IT infrastructure that’s involved in hosting the websites that we use, which can bring it up to 4% even. So the web is a pretty sizable chunk of all the emissions we produce, because it’s not free to host things. And we’re all content authors here; that’s why we joined this conference. So I wanted to talk about: what do we contribute to CO2 emissions in the environment, and how could we improve in the websites that we build, the hosts that we choose, all the decisions we make in our daily work? The goals here are to explore energy-efficient options for hosting, for developing and building your website, and also for debugging your website when things go wrong, which is kind of an unsung hero of the development process. And I’m hoping we can find some wins here that won’t sacrifice the usability or developer experience that we rely on every day. Because, say, when you’re building a side project, you’re probably emitting like 0.01 grams of CO2 before it scales. Hey, we don’t want to force you onto all of these vegan-diet tools; we want to give you the faux meat while still finding more energy-efficient gains.

Ben Holmes 2:09
And I’m not going to pretend that all of this is on us, the developers. You may have heard the term greenwashing before: misleading publicity, you know, watching an ExxonMobil commercial and feeling like you’re the problem with climate change. That’s not really the case. Companies have the largest impact, because they’re doing everything at scale, much more than the consumer or the developer. So on that topic, let’s talk about hosting, which I think is one of the largest wins you can think about when you’re deploying your website, because most of the energy, when you’re deploying a website and displaying it to users, is spent computing whatever resources you need and sending them down the wire. And there are many factors that affect that relationship. How are the host’s servers actually powered? Are they coal-powered? Are they giving back, you know, planting trees to offset their carbon emissions? What are the hosts doing? And also, what is the distance from your content to your user? The actual internet infrastructure in between is going to add more CO2 emissions, so you’ve got to consider that distance factor as well. And how often are you recomputing that content? Is it purely static content? Can it be cached? Or is it 100% on demand? And are you limiting how much is 100% on demand? When you think about it, you can categorize it as us and them, right? How the host’s servers are powered, that’s on the host to figure out. The distance from your content to your user, that’s on you to figure out. And when you’re choosing how you deploy to those host servers, choosing between static, cached, and whatever else, that’s a combination of you, the framework, and the host, all working together.

Ben Holmes 3:53
So first off, talking about the problem of knowing what servers they use: you probably hear about this from time to time, as in, is this host a green host? Is it something that you can responsibly deploy to? And it’s really difficult to get this information. I wish it was easier, but whenever you Google around, you don’t see a lot of results. The main database that I would point to is the Green Web Foundation directory. The Green Web Foundation is certified to actually investigate how servers are run, and if a company reaches out and gets certified, the directory can confirm to you whether the server that you’re deploying to is green, or carbon neutral, maybe. And Cloudflare is a great example here. It was the easiest for me to find information on them, since they have some announcement posts, which I have in the footer here; you can check out the slides afterwards. You have Worker cron jobs, and they can be switched to sustainable hosts. So if you’re running nightly tasks on a schedule, you can choose to run them on a sustainable host. It’s almost a no-brainer. And also Cloudflare Pages, which you can think of like Netlify or Vercel, those are also running on 100% renewable energy. So if you’re thinking about a host (not sponsored at all), I just realized Cloudflare is a really good option here. And I’d like to see some other companies follow suit on transparency about where things are deployed. You can also find information for AWS; I know that AWS West is on a sustainable host, according to the Green Web Foundation. And Google Cloud Platform has some options in their console, if you deploy to them.

Ben Holmes 5:30
Now, you know, if you choose that host, if you can choose that sustainable option, then it’s on you to figure out how to deploy to that host effectively. So let’s talk about the progression of JS hype that’s been going on. This is kind of how it felt following web dev from when I started doing this in 2015 up until now. At first you were learning how to deploy a server to Heroku, then you were learning how to deploy serverless to AWS Lambda, and then you were trying to figure out what the edge runtime is. Second, you were probably doing on-demand requests, deploying an Express server, and then you learned about caching practices and about Jamstack options that generate static assets. And the biggest one: we started with client-side SPAs, if you were in the JavaScript world, and now we’re learning about all these server templating options. But I would argue we should have learned all of these in reverse; it probably shouldn’t have gone from heaviest to lightest. Instead, the ideal project lifecycle should probably start with the smallest compute runtime that it can. So in a perfect world, we would start with edge compute, graduate to serverless when we have the need, and then go to full-blown servers if stateless isn’t an option for our app. We would also start with pre-rendering and caching our assets, and only switch over to on-demand endpoints when necessary. And then the one you might expect, me being an employee of Astro: starting with server templating, and reaching for client-side bundles only when interactivity is necessary. This is a really nice progression of opting in to more compute, and more energy used, as you’re building your website. And yeah, this takes us into sort of the framework problem, where it’s a combination of us building our product and the framework maintainers letting us do all of these amazing things without having to think about it too hard.
So first off, I do want to shout out some of the stuff happening under Vercel’s umbrella. Next.js and SvelteKit have some pretty magical runtimes these days that will choose whether to deploy a route as a static asset or dynamically, based on how you use the framework. So if you’re just writing some HTML, it’s going to cache that and send it as a static asset. But if you start reaching for the response object, or you reach for browser cookies, then it flips a switch internally to change that into a dynamic server request. You don’t even have to know what you’re doing. Well, you do have to know what you’re doing, because it’s definitely trusting you, but you don’t have to worry about flipping that switch on your own. And if you want a manual opt-in, they also have per-route runtime options. There’s `export const runtime` on Next.js layouts and pages to let you say, I want to deploy to the edge. And I would argue that edge should be a default as soon as it’s a stable runtime that people understand. Right now Node.js is the default, but I could see that switching at some point in the future. I also have a little example here of fetch calls in Next.js, where you can add a revalidate flag. By default it’ll cache your fetches, and then you can add in more sauce if you need to. You can also use Astro to pump out static assets. There are a million options to go from static to dynamic, with varying degrees of magic.
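A sketch of those per-route opt-ins, assuming the Next.js App Router; the file path, URL, and data shape here are placeholders, not a real project:

```jsx
// app/posts/page.jsx — hypothetical Next.js App Router page.
// Per-route opt-in: run this route on the Edge runtime instead of
// the default Node.js runtime.
export const runtime = 'edge';

export default async function PostsPage() {
  // Cached fetch: Next.js reuses this response and re-fetches it
  // at most once every 60 seconds (the "revalidate flag" mentioned above).
  const res = await fetch('https://example.com/api/posts', {
    next: { revalidate: 60 },
  });
  const posts = await res.json();

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.slug}>{post.title}</li>
      ))}
    </ul>
  );
}
```

Without the `runtime` export and the `next` option, the same file would fall back to Next.js defaults, which is the "magic" switch-flipping described above.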

Ben Holmes 8:42
So now let’s talk about what I’ve been hinting at a little bit: the opt-in mindset. This is something that I’ve given a talk on a few times before, and I’m going to run through some classic slides that you may have seen before, at React Rally perhaps. So a classic example that I like to start with is building a Create React App project. This is where I started learning about SPAs, and it may be where you started learning about them as well. You start your project with the hello world: you’ve got your React, your React DOM, React Router, styled-components, the base bundle sizes that you learned about in a dev.to blog post, and you throw it all together, and you end up with 66 kilobytes in the payload. Then as you start progressing through your app, you realize, oh, I need a signup form here. Maybe I’m building an e-commerce site and I want to add newsletter functionality. You might add in, you know, event.preventDefault on form submit to validate the form and have some nice client-side updates while you’re typing, and that bumps up the client JS a little bit more. But again, you’re still following the rails; you know what you’re doing. Then you add in your cart bubble, a little flyout that tells you all of the items that are in your cart. And for that, you now need some global storage, some client storage for server state; maybe you even have Redux to duplicate your server state on the client, all of those best practices. And of course, now we’re getting heavier and heavier, until you hit that performance audit in Lighthouse, and you’re wondering why your performance score is 39. You were following all the steps, you were doing what the community was talking about, but you ended up with a result that was heavier than it needed to be, because maybe you just weren’t sure of the tools available, or you were just following the guidelines to build an interactive app when you may not have needed that level of interactivity.
And there are a few pains to this approach, right? You know, we’re doing opt-out design with this approach.

Ben Holmes 10:32
So with Create React App, you need to eject if you ever want to tweak your webpack config, which is quite difficult. styled-components will ship JavaScript with your styles by default, and it is difficult to opt out of that. And React context, of course, is storing state higher up, which can trigger many a re-render, and opting out with useMemo is something that you don’t really think about until you get to the performance problems later. It’s inefficiencies waiting to happen. So now let’s flip that to an opt-in mindset, using something like Astro perhaps. Speaking to the topic of the talk, how Markdown will stop hurricanes: well, let’s build an index.md file. That’s our hello world, and that’s gonna ship zero kilobytes of client JS. We’re rendering our Markdown on the server, and we’re sending it to you with information about our website. And then we have that same signup form. In this case, you know, the web has had POST requests for a million years, so maybe you just add a form and use Zod for some client-side validation, to still have some checks while you type. You type in an invalid email, then you fix it, and it shows as valid without a server request. Still nice to have that, but you don’t need full-on React in order to do it. So now we’re at a low JS payload, still getting the functionality that we want. Again, say you want that cart bubble: maybe you `astro add react` in order to opt in to using React on the client, along with a smaller global state library like Nano Stores. Or maybe you go full server: you opt in to doing dynamic requests in order to handle every user individually, because everyone has a different cart. You can `astro add node` and use session storage to achieve the same thing. Both of them are opt-ins, one server-side, one client-side.
And then when you get to that performance audit, we realize, you know, HTML is what rules the world. Markdown is also what rules the world. It’s really nice to just lean on what the browser gives us, especially when the browser is making it so easy to do these days.
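A rough sketch of that progressive-enhancement signup form: plain JavaScript layered on a regular HTML POST form, with a hand-rolled `validateEmail` helper standing in for a schema library like Zod. The form still works with zero JS; the script only adds inline feedback while you type.

```javascript
// Minimal client-side email check layered on a plain <form method="POST">.
// Deliberately simple pattern: something@something.something
function validateEmail(value) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

// In the browser you would wire this to the input's "input" event:
// emailInput.addEventListener('input', () => {
//   emailInput.setCustomValidity(
//     validateEmail(emailInput.value) ? '' : 'Please enter a valid email'
//   );
// });

console.log(validateEmail('ben@example.com')); // true
console.log(validateEmail('not-an-email'));    // false
```

If JavaScript never loads, the `<form>` still POSTs and the server validates, which is the opt-in mindset in miniature.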

Ben Holmes 12:31
So when we flip that model, we realize, you know, you start with the static stuff, and you opt in when you actually need to reach for a tool. And it’s on the framework to make all of these opt-ins easy; the `astro add` command is my personal favorite way to do it, because you don’t have to manage a package.json. All of this speaks to opting in to compute, compute being the real energy spender in whatever you deploy: starting with server, then going to client; starting with pre-rendering, then going on demand (remember what Jamstack was talking about); and starting with static, then going serverless, edge, whatever compute you want to run. As long as you start with static and understand your runtimes, you can make the proper choice page by page. And now we’re gonna get into VC funding mode: let’s talk about crazy numbers, talk about market share that may or may not be correct. This is based on data from the Website Carbon Calculator, which is a great resource to see how much carbon your websites might be producing, based on their understanding of hosting providers. If we look at that, the average amount of CO2 pumped out per page view is 1.17 grams; according to many a metric, that’s like the industry average. And if we were to lower that and ship everything as just static HTML, which ships almost nothing, we could make a huge dent in the amount of computing resources that the internet needs to survive. Obviously it’s infeasible to run the internet on Markdown, but if we were in some perfect world where everything was just an RSS feed (shout-out to my Astro coworkers, I would love that kind of future), it would make a pretty decent impact on, you know, the ozone layer. So I’ll also highlight Starlight here, which is Astro’s documentation starter.
This is what got me thinking about these metrics, where we ran a little performance audit on an average Starlight page. When you build your documentation with Starlight, the CO2 per page visit is fantastic compared to the other options. Again, it depends on how often that documentation site is visited, so you won’t see crazy high gains. But if everyone were to adopt this across all of the documentation sites on the internet, well, that would make a much more sizeable dent.
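To put those numbers in context, a back-of-the-envelope calculation. The 1.17 g figure is the industry average quoted above; the lighter per-view figure and the traffic volume are made-up values for illustration only.

```javascript
// Rough CO2 math using the average quoted above.
const AVG_CO2_PER_VIEW_G = 1.17;  // grams CO2 per page view (quoted average)
const LIGHT_CO2_PER_VIEW_G = 0.1; // hypothetical mostly-static page

const monthlyViews = 100_000; // made-up traffic for illustration

const avgKgPerYear = (AVG_CO2_PER_VIEW_G * monthlyViews * 12) / 1000;
const lightKgPerYear = (LIGHT_CO2_PER_VIEW_G * monthlyViews * 12) / 1000;

console.log(avgKgPerYear.toFixed(1));   // "1404.0" kg CO2/year at the average
console.log(lightKgPerYear.toFixed(1)); // "120.0" kg CO2/year for the lighter page
```

The absolute numbers are small for any one site, which is the speaker’s point: the dent comes from adoption across many sites, not from one project.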

Ben Holmes 14:47
So we can start a little movement here if we’d like to. And now that we’re deployed, how do we maintain our site? I’m also going to check on my time here. How am I doing on time? Not well at all. So let me run through a couple slides... oh, wait a minute, I started 15 minutes ago, so I’m actually totally fine. I have 14 minutes left. Fantastic. I thought I had zero minutes. Anyway, once we’ve deployed our website, how do we maintain our code? That’s what we want to get to next. First we figured out hosting solutions, then we figured out the framework that we want to use. Now we want to debug problems as they arise and manage our website over time. So, a loose definition of debugging, based on my field knowledge and how things feel: what does it take to mock, compile, run, and rerun your website to get to the root of a problem? Do you have to do a lot of console logs? Are there ways to step through the logic of your application with a Node debugger? What options are available for you to find the problem?

Ben Holmes 15:59
So first we’ll talk about the good side of debugging, because debugging has gotten pretty dang nice with full-stack frameworks, and web devs in particular are spoiled by instant feedback loops, instant gratification. If you talk to an AI researcher, like my roommate back in college, he had to wait half an hour for models to run. And us in web dev, we argue about a website taking 200 milliseconds to reload Tailwind instead of 50 milliseconds. We are so far down the optimization chain, we’re counting the milliseconds as we’re updating things in the dev server. We’re hot module reloading in order to preserve state on the page and only hot-replace the parts that change, so we don’t even have to wait for the whole page to reload anymore. And we have JS across the stack. When you do that, you’re able to walk along the front end, the back end, even the database ORM, and feel like you’re in familiar territory, which is always nice when you’re trying to get to the root of a problem, because usually people come from a one-language background, especially if they’re a junior developer fresh out of college or a bootcamp. So if you’re able to tap into that knowledge, you get some really nice results. Little Venn diagram here: we’re using JavaScript for RPC endpoints and for React front ends, and if you’re using Drizzle, or Kysely, or Prisma, you’re using it to manage your database as well. And of course we love Vite stitching all of this together with frameworks like Astro and SvelteKit, because it makes the dev experience very good; we can see changes as we’re working on the problem. But there are some downsides to the flows that we’ve adopted. The big one that I’ve been thinking about a lot is just how many things you have to stitch together to build a JavaScript website. It’s not the same as Ruby on Rails, or PHP and Laravel.
You kind of have to know the landscape of tools across authentication, deploying databases, and managing your UI systems, trying out Clerk and PlanetScale, whatever the new hotness is, in order to understand how the pieces fit together. And that leads to points of failure and a lot of things to understand, different documentation you have to go reference. And as we’re blurring the server and client boundaries with these full-stack tools, we’re getting into some really complex territory in how we bundle our code. If you’ve heard about React Server Components and how they’re being implemented, they’re basically having to rewrite how webpack handles a React bundle, because you have your server components, which can import client-side components, and bundlers need to separate what goes to the browser and what stays on the server.

Ben Holmes 18:42
Then you have server actions, which are functions that run on the back end that you can import into front-end code. There’s so much that goes into that, because you need a bundler that can strip out which parts deploy to which platform, and they’re all sort of cobbled together in your source code, which first off makes it very difficult to have a fast bundler. webpack is definitely having to improve as it goes to make React Server Components faster in development. It also means that if you try to test a server component, you don’t have a lot of tools, at least not right now, because there’s so much magic behind the scenes that you would have to mock out. It’s a work in progress. And also, those endpoints I mentioned, server actions, maybe tRPC, those all tightly couple your front end and back end together. And I’m not really worried about the tight coupling itself; you know, sharing the same API between a mobile app and your website is definitely something you want to think about. The real worry with coupling is that it’s hard to mock everything and understand, okay, how do I make sure my tRPC logic is working correctly? Can I just import that into my unit test? It’s not a separate RESTful API, so it’s a little bit different; I don’t know how to mock it. And of course, we have more and more tools on top of JavaScript which require build steps. You may have run into problems with Jest and Testing Library under ESM, as JavaScript changes module systems, or trying to build your TypeScript quickly in order to run it in a test environment, or mocking JSX files, or Svelte files, or Astro files, which we don’t have a testing story for. There are all these new file formats that also need to be adapted in order to be testable, which is yet more stuff that consumes energy to compile, and also consumes developer hours to mock out. And that leaves us with console logs, a lot of the time.
Working with people in full stack, I rarely see the Node debugger come out. It’s almost always a console log deployed to an edge runtime, and waiting for that prod-only bug to come back to see what the heck’s going on.
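One way around the mocking problem described here is to keep framework-coupled endpoints thin and put the logic in plain functions. This is a generic sketch, not tied to any real tRPC or server-action API; the function names and the discount rule are hypothetical.

```javascript
// Keep business logic as a plain function with no framework imports...
function applyDiscount(cart, code) {
  // Hypothetical rule: "SAVE10" takes 10% off the cart total.
  const total = cart.reduce((sum, item) => sum + item.price, 0);
  return code === 'SAVE10' ? total * 0.9 : total;
}

// ...so the endpoint (tRPC procedure, server action, REST route) becomes a
// thin wrapper you barely need to test, something like:
// export const checkout = action(async (cart, code) => applyDiscount(cart, code));

// A unit test can then import applyDiscount with nothing to mock:
console.log(applyDiscount([{ price: 50 }, { price: 50 }], 'SAVE10')); // 90
```

The coupling stays at the edges, and the part that actually breaks in production is importable into any test runner as-is.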

Ben Holmes 20:46
And with prod bugs, you end up in a situation like this. Well, this one is about linters specifically, something very familiar to me, but you run into a lot of CI-only and prod-only bugs, because getting those fast dev servers we were talking about means you have a different experience in development versus the production build. Production builds are too slow to mimic in development, which means you might run into prod-only bugs that you can only test by pushing commit after commit to your environment, and letting CI waste all those compute minutes to rerun your build cycle and redeploy for review. So, a lot of energy. And you kind of feel in the dark, just putting out one console log at a time, waiting all those minutes for it to rerun. Edge runtimes also have this issue, because they’re not Node.js. They’re kind of like Node.js with things taken out, or it’s actually Deno running your source code rather than Node.js. As we’re trying out these new environments, there’s also this weird discrepancy between what you’re used to, which is probably Node and TS, and what JavaScript is trying to become on whatever platform you’re deploying to. It’s a blessing and a curse to make things adaptable to any environment. And taking that to the CI/CD level, it will increase your compute time with every environment you add, testing on Linux versus Mac versus Windows (we definitely have that problem in Astro core’s unit tests), and upping it with build processes. If you have to recompile TypeScript and recheck types every time, that’s pretty inefficient, and you’ve got to optimize. webpack and Vite can also take a lot to compute that production build. And you’re shipping all those console logs, and every time you do, you’ve got to rerun all of this stuff I mentioned over again. It’s all very difficult. So what are some silver linings? What are some options that we have to improve on debugging?
One that I wanted to point out was Replay. You may not have heard of it, but it’s this interesting debugging platform (and this is a live video of how it works, by the way) that can record all of the system calls that your app is performing in production, and then turn them into an actual replayable video of all of the events that happened right up until a bug was found. So if you’ve heard of snapshot testing, or LogRocket for walking through things, it’s those on steroids. They’re actually recording system-level calls, and only generating a video on demand for you to step through. And I’m just playing with their playground here right now, where it generates an error whenever you submit a form, which is what was shown in that little player right there. On the left, you can get access to your source code with all the source maps lined up correctly, and you can add little debug points to see where a given line of the program ran. You can see on this little timeline right here, okay, it ran a little bit before the bug happened. And then if I click on other parts of my code, like the top of the component, it will tell you, oh, this component re-rendered four times: once in the initial stages, once when it was submitted, and once right before the error happened. And you can sift through all of those with lines inside of your developer console.

Ben Holmes 24:01
To tell you where each line came from, specifically: there’s so much information at your fingertips, and it’s kind of just the tip of the iceberg for them; it’s a very early startup. And the beauty of how they do it is they’re just storing the system calls, so it’s smaller than an MP4 file. It’s not rebuilding your website for you to step through all the debugging, so you don’t have to ship a console log, see it didn’t work, and ship another console log. You can just play with it in the live editor, and it will only load the replay on demand. So you don’t have all these wasteful deploy previews; it’s only when bugs happen that you actually pull the video back. If that’s not clean and efficient computing that helps you get to a problem quickly, I don’t know what is. It’s kind of the best of all worlds. I definitely want to try out this tool some more. Other options: of course, when we talk about bundling, people are kind of walking back the bundlers they used to use. Svelte is a big example: they stopped using .ts files and switched over to JSDoc instead. This let them keep all the type safety that they had, and also keep all the user-facing types, so Svelte is just as approachable, if not more so, because now you can click straight through to the source code. And they cut all of those release and CI build costs of running everything through the TypeScript compiler every time, because as we know, it’s pretty slow to run tsc, even just for type checks, and also for processing your builds. If you can cut that cost to zero, releasing to npm is fast, and installs will be smaller when you install Svelte. It’s much smaller than the bundle was before, which is savings on every npm download, really big for a large project. And, you know, it’s less to mock out, it’s less for a developer to spin up on. If you want to be a contributor to Svelte: open VS Code, type some code, then do a pnpm link to link it to your project, and you’re good to go.
Nothing else to run. It’s really nice to strip back bundling when you can do it without any drawbacks.
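A small illustration of the JSDoc approach (not Svelte’s actual source, just a sketch): the type information lives in comments, so the file is plain JavaScript with no compile step, while editors and `tsc --noEmit --checkJs` can still type-check it.

```javascript
// clamp.js — plain JavaScript, zero build step, but still fully typed
// for editors and for `tsc --noEmit --checkJs`.

/**
 * Clamp a number into an inclusive range.
 * @param {number} value - The number to clamp.
 * @param {number} min - Lower bound.
 * @param {number} max - Upper bound.
 * @returns {number} `value` limited to [min, max].
 */
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

console.log(clamp(15, 0, 10)); // 10
console.log(clamp(-3, 0, 10)); // 0
```

Because the annotations are comments, what you publish to npm is exactly what you wrote: no transpile on release, and consumers can click through to the real source.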

Ben Holmes 26:00
And of course, there was actually a study, which I list down here if you want to go look at it. It’s kind of a vanity metric, but running tsc can add a lot of energy cost to your builds compared to plain JavaScript. A scrappy metric, but it’s good to know that there is an energy cost that you can measure. And the last thing I’ll leave you with: better alignment of development and production. That’s something that I would like to see going into the future. First off, Turbopack is doing something pretty promising here. Right now they’re working on production builds in development, essentially, where instead of Vite’s approach, which is just shipping your source code to the browser with very minimal transforms, Turbopack is trying to do a full production build and rerun certain parts of that build as you make changes. And since it’s powered by Rust, which is blazingly fast, it’s actually very quick to rerun those production builds on a page or layout basis. Very early days, obviously, but I am definitely following their journey, because if this does happen, if this does become a generic solution to building component systems, especially with React Server Components, well, that would definitely fix the dev versus prod problem. There’s also Cloudflare’s Wrangler, which is a way to mimic their Workers environment locally. Trying it out recently with their changes in version 3, it’s really stable, it’s really easy to use, and the developer flow is nice. You can type in TypeScript and not worry about TypeScript compilation; it kind of just gobbles it up for you. And it gives you some really nice docs on how to mock out connecting to a SQLite database and other connections. It’s the best edge runtime that I’ve found so far that lets you mock things locally. Deno Deploy is also worth a shout-out there, since Deno also handles TypeScript and everything for you.

Ben Holmes 27:47
And I’ve also been thinking about: is there a way to edit the code where you’re running it? Replay got me in that mindset. It seems AWS does have an editor available to play with your AWS Lambdas and just type out changes and see them deployed live, without having to hammer your CI with commits. So I’d be interested to try something like that as well. You can also think about StackBlitz, or CodeSandbox, all these other tools that give you online environments, so you don’t have to switch from local to production and back again. Really, the best line you can draw is directly between you and production readiness, so that you can debug problems. So, answering the question: can Markdown stop hurricanes? Maybe not stop them. But if we compute less, we opt in more, and we debug beyond just console log statements, I think it can do it. I think we can. So thank you, everyone, for coming to the talk. These are my social links if you want to join me. I’m always on the Astro Discord at astro.build/chat; you’ll find me around there helping you with content formats, especially Markdown. And I’m @bholmesdev on all the platforms. This is my little website, wtw.dev, where you can watch me explain things on a whiteboard. Trust me, it’s fun, even though you didn’t get to see it today. And my slides will also be hosted on GitHub. So yeah, thank you so much for your time. Appreciate it.

Brian Rinaldi 29:19
Thanks so much, Ben. And I gotta say, there’s some unique irony here. As you were talking, some of you may have noticed it got really dark, and as you were talking about stopping hurricanes, I mean, it looked like a hurricane back there, I swear to God. You can’t see it now, it’s started to calm down, but there were wind gusts blowing the trees around. I literally thought the power was going to go out, because those looked like hurricane-force wind gusts out there. So yeah, I for one think you did this somehow. I don’t know, you have some kind of magic. You know, it’s like, oh, well, we need to set the mood.

Ben Holmes 30:04
It’s gray here too. It’s been wanting to rain all day. So I thought there was something in the air, or something. Yeah.

Brian Rinaldi 30:10
It was like, literally, the rain was going horizontal because the wind was so hard. It was crazy. So yeah, Potter, who runs the show behind the scenes, was sitting there looking at what was going on behind me like, oh my God. Yeah. So, all right, enough about the actual weather, let's talk about your talk. So first of all, I thought it was interesting that a lot of what you talked about, I mean, you framed it from an environmental perspective, but it's also like: here are things we can do that will ultimately improve the performance of our app in general, and improving the performance of our app will lessen its power usage, right? And then another aspect is that these things improve the developer experience, because the less pain I have to go through pushing to production and into my CI/CD process for testing, the happier I'm going to be as a developer. So yes, it was framed from an environmental perspective, but all of these things obviously make both the user experience and the developer experience better.

Ben Holmes 31:23
Yeah, it's just a through line, really. I mean, it's not a perfect mapping from how much compute time to how much impact there is. Again, the hosting problem is going to be a lot of it, because where you run the code is going to influence things a lot more than what code is running. Like, if it's on a green host and you're doing things statically, that's going to have a compounding effect beyond whatever else you're trying to do. But other stuff, yeah, I feel like debugging is one that I want to explore more deeply than I even did in this talk. Because I've been a console.log warrior forever, and I just started playing with Node debuggers again doing stuff at Astro, and I realized, yeah, this really speeds you up, I should be doing this more. It's just so hard with all these random tools to actually use a proper debugger. So I don't know if you use a debugger in your day-to-day, but I feel like I should. Yeah,
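(To make the debugger idea concrete: a `debugger` statement is a breakpoint written in code. The function below is a hypothetical illustration, not code from the talk; it runs normally because `debugger` is a no-op when no inspector is attached.)

```javascript
// Instead of sprinkling console.log calls, drop a `debugger` statement and
// run with the inspector attached, e.g. `node --inspect-brk app.js`, then
// open chrome://inspect (or attach from your editor). Execution pauses at
// the statement with full access to local variables.
function computeTotal(items) {
  debugger; // pauses here only when an inspector is attached; no-op otherwise
  return items.reduce((sum, item) => sum + item.price, 0);
}

console.log(computeTotal([{ price: 2 }, { price: 3 }])); // → 5
```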

Brian Rinaldi 32:21
I know, I tend to be a console logger too, like you said. And it is a pain. I mean, one of the struggles I find with modern frameworks tends to be that sometimes I don't know where the code is running. Next.js has this problem, I think, and SvelteKit tends to have this problem too, where code runs in both places at times, unless you tell it, hey, run this only in the browser, or run this only on the server. But Next.js has that problem all the time, where you're mixing frontend and backend code. And so trying to figure out where code is running at times is difficult. So maybe a debugger would help with that, I don't know.
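(One low-tech way to answer "where is this running?" is to tag every log with the environment. A minimal sketch with a hypothetical helper name, not something from the talk:)

```javascript
// Label where a piece of code is executing: in a browser `window` is
// defined, in Node/Deno server runtimes it is not.
function runtimeLabel() {
  return typeof window === "undefined" ? "server" : "client";
}

// A tagged logger makes mixed frontend/backend code self-explanatory.
function log(...args) {
  console.log(`[${runtimeLabel()}]`, ...args);
}

log("rendering product page");
// when run under Node this prints: [server] rendering product page
```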

Ben Holmes 33:07
It's early days. People are talking about how you don't even know which console to open. It's nice to have everything in a familiar place. I know there's a Vite plugin that will pull your client console into your server console, or maybe it's the other way around. Either way, it would get all of your logs in one place and just say, this console.log was server, and this one was client. So then you don't have to pop open both of them to figure it out. Because sometimes something's running in the client and you don't even know, you think it's always on the server.
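(The mechanism Ben describes can be sketched in a few lines. The helper and the `/__client-log` endpoint below are invented for illustration; real Vite plugins do this plumbing for you.)

```javascript
// Tag each log line with where it came from, so client logs can be
// forwarded to the dev server and printed next to server logs.
function tagLog(source, args) {
  return `[${source}] ${args.map(String).join(" ")}`;
}

// On the client you would wrap console.log and POST the tagged line to a
// dev-only endpoint, something like:
//   const original = console.log;
//   console.log = (...args) => {
//     original(...args);
//     fetch("/__client-log", { method: "POST", body: tagLog("client", args) });
//   };
// The dev-server middleware then prints the request body to the terminal,
// so both consoles land in one place.

console.log(tagLog("server", ["hydration skipped"]));
// → [server] hydration skipped
```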

Brian Rinaldi 33:36
Yeah, exactly. Sometimes you're in there and you're like, why is this not working? And finally, after trying and trying, you go to the other log and you're like, oh, because this code happens to be running on the server, and somehow I didn't realize. So I can see how, for developers, it gets complicated. Even the edge stuff you were talking about: from a developer experience point of view, it's really nice that this stuff just magically deploys for you, like, oh, I just write this piece of code here and it goes out to the edge, it knows. But at the same time, from a debugging standpoint, it gets complicated, because you have to know where each little bit of code is going. And the edge, as you mentioned in your talk, presents a unique problem in that the Node runtime the rest of my app is running on is different from the edge runtime that portion of code is deploying to. So now I've got to keep track, like, oh, I can't use this npm tool in the edge piece of my code, even though otherwise it's all Node, right? So yeah,

Ben Holmes 34:51
Right, whereas otherwise it all runs in the same place. And then you've also got, you know, new runtimes like Bun or Deno. You can deploy Astro to Deno Deploy, but you don't author your code in Deno, you author it in Node, and you just rely on the bundler to do the conversion. And if it gets it wrong, then we definitely have a problem. And even though it's compatible, if you build it with Bun, it's not guaranteed to actually work on the production environment that you have. So there's a cost to the bleeding edge; you will definitely bleed if you try out environments that aren't the norm. And I would say Node, Node serverless, that is still the norm, and it probably will be for a little while. It will take time before Cloudflare Workers are more of a norm. I'd like them to be. But yeah, you know,
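(One way to reduce that bundler-conversion risk, sketched below under the assumption of a modern runtime, is to prefer Web-standard APIs that Node 18+, Deno, Bun, and Cloudflare Workers all ship, instead of Node-only ones. A hypothetical example, not from the talk:)

```javascript
// Node-only: Buffer.from(str).toString("base64") — Buffer does not exist on
// Cloudflare Workers. The portable version uses the Web-standard TextEncoder
// and btoa globals, available in Node 16+, Deno, Bun, and Workers alike.
function toBase64(str) {
  const bytes = new TextEncoder().encode(str);
  let binary = "";
  for (const b of bytes) binary += String.fromCharCode(b);
  return btoa(binary);
}

console.log(toBase64("hi")); // → aGk=
```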

Brian Rinaldi 35:41
I talk about this edge stuff a lot, and I think your average web developer isn't going to know, under the covers, that, hey, if I deploy this same app to Netlify, the edge portions are being deployed to Deno, whereas if I take the same app over to Vercel, those functions are being deployed to Cloudflare. And those are two completely different runtimes. I mean, I don't know if you're going to run into problems, but I can imagine you would. And the average developer is just going to be like, well, why does it work here and why doesn't it work there, you know? So I think it's great stuff, I love it, but it gets complicated.

Ben Holmes 36:34
Yep. Yeah. Because every runtime is different, there's no alignment there. AWS Lambda@Edge is technically Node. Fly.io is Node, but kind of deployed regionally, so they can call it edge. Yeah, you need to know what your host is doing. You can't just assume, I go from one edge to another edge and it's fine, because it's the runtime that's changing, not just the service, which is frustrating if you need to do a migration. I luckily haven't, but I can totally see that.

Brian Rinaldi 37:09
Yeah, yeah. I mean, most people don't, but, you know, especially if you're learning, you might say, oh, I'm going to try and deploy it here and there, and not knowing all of the underlying stuff is complicated. We didn't really need to know that before, especially when we were dealing with static stuff, right? Like, it all happened on my computer, so I didn't have to think much about the SSG side of it, because it just ran on my computer, and the host was just a static CDN. So,

Ben Holmes 37:42
Yeah, yeah. It's Markdown, just deploy Markdown, people.
