Performance at Rest

Disclaimer: the author is the lead developer of a consistently poorly performing node web framework (as measured by framework benchmarks). I mean, it really sucks.

Benchmarking frameworks is fucking stupid.

Every few months someone comes up with yet another system to benchmark web frameworks. They set up a few simple scenarios, like serving static content, a JSON reply, and sometimes rendering views or setting cookies. The typical examples contain almost no business logic. It is a theoretical test of how fast a framework performs when it accomplishes nothing.

In this scenario, the lighter the framework is – that is, the less functionality it is offering out of the box – the faster it is going to perform. It is pathetically obvious. It is one thing to compare the performance of various algorithms but when the biggest factor is how much other “stuff” is performed, you don’t need to write tests – you need to RTFM.

To those who occasionally bring up hapi’s poor performance on these ridiculous charts, I make two points.

First, hapi is slower than bare node and express because it does more. Don’t you want protection against your process running out of memory? What about event queue delay protection? What about client request timeouts? Server response timeouts? Protection against aborted requests? Built-in request lifecycle logging? Input validation? Security headers? Which one of these is optional? If you say most – hapi is clearly not for you.

Second, the Walmart mobile servers built using hapi were able to handle all mobile Black Friday traffic with about 10 CPU cores and 28GB of RAM (of course we used more, but they were sitting idle at 0.75% load most of the time). This is mind-blowing traffic going through insignificant computing power. Why would anyone spend engineering resources trying to optimize it when it is clearly performant enough?

But this post is not about how stupid framework benchmarking is.

To understand what makes benchmarking node different, you need to understand what is under the hood. Node is built using Google’s v8 JavaScript engine. v8 is a highly complex virtual machine with an ever-changing runtime optimizer. Picking one coding style over another can carry with it double-digit performance gains. For example, using a for-loop is often 80% faster than a functional for-each. This matters because a big part of making node applications faster requires constant tweaking to benefit from optimizations and avoid the blacklist of unoptimized code (e.g. any function with try-catch).
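
As a rough illustration, here is a minimal micro-benchmark sketch. The exact gap depends heavily on the v8 version, the data, and whether the callback gets inlined, so treat the output as directional rather than definitive:

    // Sum an array with a plain for-loop vs. Array.prototype.forEach.
    // Results vary widely across v8 versions; this only shows the shape of the test.
    var items = new Array(1e6);
    for (var i = 0; i < items.length; ++i) {
        items[i] = i;
    }

    function sumForLoop(arr) {
        var total = 0;
        for (var i = 0; i < arr.length; ++i) {
            total += arr[i];
        }
        return total;
    }

    function sumForEach(arr) {
        var total = 0;
        arr.forEach(function (value) {
            total += value;
        });
        return total;
    }

    console.time('for-loop');
    sumForLoop(items);
    console.timeEnd('for-loop');

    console.time('forEach');
    sumForEach(items);
    console.timeEnd('forEach');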

In addition to the optimizer, v8 has to perform continuous garbage collection. This is required to free up memory taken by objects that are no longer being used. In order to minimize its impact on performance, v8 tries to limit garbage collection to application idle time. Also, the longer an object “survives” garbage collection, the less likely it is to be removed quickly when it is no longer needed. And the more stuff you do, the more objects are generated and need to be cleaned up.

The other critical component is the node event loop. The event loop is the “single thread” running your code. It is not exactly a single thread but as far as your application is concerned, it is a single-threaded engine. Everything that happens in node is called from the event loop. It is a queue of I/O events and timers which trigger your callbacks – basically, your entire node application is nothing but a collection of callbacks.

What allows node to handle a large number of requests is the fact that most activities block the event loop for a very short period of time. For example, a typical web request needs some items from a database. While those are being fetched, node puts the request on hold and handles other requests until the database comes back with the items. Node requires this downtime to handle multiple requests. v8 requires this downtime to perform garbage collection.

When v8 is performing garbage collection, the event loop is paused. When a callback takes a long time to return control back to the event loop, all other callbacks, including expired timers, are paused. If your business logic performs a calculation that takes 100ms, you will not be able to handle even 10 requests per second. Simple math.
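
A sketch of that arithmetic, using a made-up port and a busy-loop standing in for the blocking “business logic”: a handler that holds the event loop for 100ms cannot produce more than roughly ten responses per second from a single process, no matter how many connections are waiting.

    // Hypothetical server whose handler blocks the event loop for ~100ms per request.
    var http = require('http');

    function busyWork(ms) {
        var end = Date.now() + ms;
        while (Date.now() < end) {}     // synchronous work; nothing else runs meanwhile
    }

    http.createServer(function (req, res) {
        busyWork(100);                  // 100ms of blocking "business logic"
        res.end('done\n');              // at best ~10 of these per second per process
    }).listen(8080);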

Why does this matter for benchmarking? Because these benchmark systems focus on performance at maximum load. They basically measure how many requests a server can handle under heavy load. The goal is to squeeze everything you can out of your computing resources. The problem is that under 100% CPU, node’s performance is dreadful.

At very high CPU loads, node’s event loop is fighting with the v8 garbage collector over resources. They can’t both run at the same time. This means that instead of getting the most out of your resources, you are wasting energy switching between two competing forces. In fact, the vast majority of node applications should be kept at CPU load levels of under 50%. If you want to maximize your resources, run multiple processes on the same hardware (with enough margin for the operating system).
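
One common way to do that with node itself is the built-in cluster module. Here is a minimal sketch; the worker count, the one-core margin, and the port are illustrative choices, not a recommendation for any particular deployment:

    // Run one worker process per core, leaving a margin for the operating system.
    var cluster = require('cluster');
    var http = require('http');
    var os = require('os');

    if (cluster.isMaster) {
        var workers = Math.max(1, os.cpus().length - 1);    // keep some headroom
        for (var i = 0; i < workers; ++i) {
            cluster.fork();
        }

        cluster.on('exit', function () {
            cluster.fork();                                 // replace workers that die
        });
    }
    else {
        http.createServer(function (req, res) {
            res.end('ok\n');
        }).listen(8080);                                    // workers share the listening socket
    }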

If our production servers show more than single-digit CPU load, we consider that a significant problem. If your node process is CPU bound, you are doing something wrong, your deployment is misconfigured, or you don’t have enough capacity.

What makes things worse when doing this sort of benchmarking is that the load is almost exclusively blocking, because there is no business logic to go and create that downtime. Most of the internal framework facilities, such as parsing headers and cookies and processing payloads, are blocking activities that need the downtime an application with empty business logic never provides.

There is still great value in benchmarking applications. But if performance under load isn’t meaningful, what is? That’s where performance at rest comes in.

Performance at rest is the best-case scenario of your application under no load. It’s how fast you can drive from point A to point B without anyone else on the road. It is a very significant number because it directly translates to user experience and relative performance. In other words, if your server can handle an unlimited number of requests per second, but each one takes 60 seconds to complete, your amazing capacity means nothing because all your users will leave.

Measuring performance at rest is actually a bit more involved than just running a single request and measuring how long it takes to complete. This has a lot to do with the v8 garbage collector and the v8 runtime optimizer. These two are working for and against your application. The first time you make a request, nothing is optimized and your code will be very slow. The 10th time you make a request, the garbage collector might kick in and pause it in the middle. Testing once is not enough to get real numbers.

What you want to do is come up with a scenario in which you are making multiple requests continuously over time, while keeping your server CPU load within an acceptable range.
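
A minimal sketch of what that can look like, assuming a server already listening on localhost:8080; the request count and the 50ms pacing are arbitrary, and the point is simply to sample latency over time while the server stays far from saturation:

    // Send a steady trickle of requests and record per-request latency,
    // giving the optimizer and the garbage collector time to settle.
    var http = require('http');

    var latencies = [];
    var remaining = 1000;

    function once() {
        var start = process.hrtime();
        http.get('http://localhost:8080/', function (res) {
            res.resume();                                       // drain the response body
            res.on('end', function () {
                var diff = process.hrtime(start);
                latencies.push(diff[0] * 1e3 + diff[1] / 1e6);  // milliseconds

                if (--remaining > 0) {
                    return setTimeout(once, 50);                // pace the requests; keep CPU low
                }

                latencies.sort(function (a, b) { return a - b; });
                console.log('median ms:', latencies[Math.floor(latencies.length / 2)]);
            });
        });
    }

    once();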

This is where slow performance indicates a problem. If, under these conditions and with the feature set you require, your web framework is performing poorly, it should be fixed or replaced. If the overhead of the framework is making your requests too slow at rest, the framework is either too heavy for your use case or is underperforming and should be fixed.

Understanding your application’s performance is critical. Benchmarking without taking into account the very nature of your platform is harmful.

Dear CEO (of a node-powered corporation)

First, congrats! You didn’t force your developers to only use those “proven” technologies and allowed some innovation to invade your organization. You now get to join the club of companies using node. That’s pretty awesome. Node is going to significantly improve your company’s productivity, help you hire top talent, keep your developers happy, and let your teams get back to building products, not boilerplate and abstractions.

But as with any cutting edge technology, node comes with its own risks. Node is proven but it is also very new. It is in its most critical phase of achieving mass adoption right before it is fully baked. This means complexity is at its highest level, right when the contribution payoff is at its lowest. In other words, most developers are not motivated enough or skilled enough to move it forward.

This is where you come in. But first, a quick story.

A couple of weeks ago the folks at ^Lift Security identified a flaw in v8, the JavaScript engine node is built on top of. This particular flaw caused memory to leak when a certain exception was thrown, and it was an exception particularly easy to reproduce. In other words, it made it pretty easy to take down an entire site built on node if it wasn’t set up with sufficient capacity and restart automation.

The good news was that this security hole was quickly identified, corrected, and a patch released. The bad news is that the patched version introduced a new bug. This is par for the course in software development. Shit happens.

The patched version came out on a Thursday. Most companies grabbed it on Friday. On Saturday morning, when I upgraded my own development environment, I discovered that this new version breaks a feature in hapi, our enterprise-grade open source node framework. The specifics of the bug are somewhat “amusing” – it caused timeouts set with fractional millisecond values to basically get the entire node event loop stuck. Now, why would anyone set a timeout using a floating point number? Well, that was another, very old bug in hapi that never mattered before.

What makes this combination of bugs even more “amusing” is that it was in the code responsible for keeping server load under control. With these two bugs, servers would stop working altogether under load instead of handling it. Slightly different from the intended outcome.

So – Saturday morning, major security bug announced, companies upgrading their environments, and our framework cannot work on the new, safer version.

Under past circumstances, we would have contacted the core team via an issue and IRC, and waited for them to find the time to identify and fix the bug. And usually that would work well. The problem is, I am among those responsible for the development of a system that’s becoming more and more critical to the bottom line of a gigantic operation. Sound familiar? This is an unacceptable risk.

But this story has a happy ending! Within an hour of me identifying the issue, Chris Dickinson – our in-house node core contributor – was able to identify the root cause, and together we released a patched version of hapi with a workaround. This is the kind of SLA an operation like Walmart requires.

Back to you.

Node is ready, today, for taking on the most critical components of your business. But like any cutting edge technology, it comes with risks. These risks can be easily mitigated by making sure you have the right team and the right resources available to you. Access to a node core contributor is absolutely essential. This is not a luxury.

Let me make it absolutely clear: if you use node for any serious business (and I will leave it up to you to define what “serious” means), you are being irresponsible to your company and shareholders if you do not secure the appropriate access to node core resources under an SLA.

There are a few ways to gain such access.

The best of course (but also the one with the biggest commitment and probably the highest price tag) is to hire a full time developer to work exclusively on node core. But like any business decision, this has to be justified and will likely only make sense at a price point that’s comparable to (or cheaper than) paying someone else for the same SLA.

If you are not quite there yet, consider contracting a part-time consultant or hiring a company with such resources under an SLA that fits your needs. It is pretty easy to find such providers. Joyent provides this service as part of their SmartDataCenter product (as well as some limited support for Linux). NodeSource is a new company (made up of some of the most experienced node developers) offering a comprehensive solution. There are a few more, just ask around.

This is not only smart business, it is also the right thing to do. It provides crucial support to a technology you directly benefit from. It is the easiest way for you to pay back and support the community. It will also earn your company plenty of good karma points, which you will find handy when it’s time to hire the best talent.

Not sure how to go about this? eran@hammer.io

Names and Diversity

(Previously titled Nipples and Poop)

Last month I got to experience a childhood dream, one I never imagined possible. I got to sit in the front row and watch Monty Python live on stage. Twice! It was magical. It was the best 40th birthday gift to myself possible – getting to relive being 10 with the full emotional impact of reliving well-memorized moments.

I grew up watching VHS tapes of the Flying Circus. It had tremendous influence over my humor, but more importantly, the way I look at life. The absurdity of it all. The total disregard for institutions and sacred cows. If you’ve ever spent an evening with me, I am sure you’ve heard some fucked up stories about something I did against the very fabric of the institution I was part of – school, army, college, work. It’s who I am.

When I set course on hapi, an explicit goal was to change the way enterprise software is created.

Not just technically, but culturally. The configuration architecture was designed to make it simpler for entry level developers to jump right into complex requirements. The plugin architecture was designed to support a large team by breaking up large monolithic systems into smaller, self-contained parts. And the module names, logos, and references were designed to make people smile and stop taking enterprise engineering so fucking seriously.

Not everyone finds the same jokes funny.

People who grew up loving the Ren and Stimpy cartoons come into the hapi world with a grin on their face. A sense of giddiness from bringing that world of silliness into their day job. Others find it silly and just ignore it dismissively. That’s ok. The trick is to know who you are going to offend and lose as the price of making a joke.

When I was asked to name a hapi plugin that takes automatic core dumps when the process fails, I named it ‘poop’. It was a perfect pun. We now have a module that very serious ops people at large companies, my employer included, have to use and they have to say ‘poop’ in their very serious meetings. This is powerful change, and it is because it is silly.

Sure, some people find it offensive enough not to use, and that’s fine. It’s a tiny module that is trivial to recreate. It’s not like I named the entire framework ‘doodie’. But the key here is that the group of people who might find ‘poop’ offensive isn’t exclusively any segment of the population. People who take themselves too seriously are not a protected class.

That’s not the case with ‘nipple’.

The nipple module was initially created as an internal component that no one was meant to use except for those working on hapi core. I know this sounds like an excuse for picking an inappropriate name, and it is, but it was also what was going through my mind – a public private joke. And I’m sorry for that.

The problem is, that in the larger context of a community built around the hapi framework, this turns off women from using and contributing to the project. That’s unacceptable! There is no acceptable rationale for creating an environment hostile to any segment of the population.

Creating an environment in which a woman is forced to say “nipple” to a predominately male audience is unacceptable. I don’t think that requires any explanation. It might also create a situation considered sexual harassment in many places. This has nothing to do with political correctness which is all about appearances.

What is interesting about the ‘nipple’ experience is that no one brought this issue up. I’ve had very open, frank conversations with women about making a significant shift in diversity within the hapi community and while other topics came up, this didn’t (even though it turned out to be on their mind). But when I asked plainly on Twitter what people thought, the response was strong, quick, and overwhelming.

The issue only came up as part of my review of all hapi language for potentially offensive words or expressions. I have made it my goal to dramatically change the makeup of the hapi community. I want to create a project that’s the role model of inclusiveness and diversity. The gold standard in how to build the most inclusive and safe environment in open source. Clearly we have a long way to go.

A big part of that includes reaching out to people and soliciting contribution. You change a community by starting with the diversity of its leadership. So I set out to contact people from under-represented groups about joining the hapi leadership. All of a sudden, I felt a bit uncomfortable asking a female developer if she wanted to take lead on ‘nipple’. It stopped being funny in my head.

An hour after asking for feedback, the ‘nipple’ module was renamed to ‘wreck’, a pun on ‘req’ (a common short name for ‘request’ in node). It’s still silly. We are going to continue to review the language used around the project and solicit feedback. I am going to continue asking questions, and I am confident we’ll get this right.

Bringing this topic up surfaced some unhappiness with our use of non-descriptive (and outright silly) names for modules. Turns out, a lot of people don’t share my sense of humor. No surprise there. But that’s missing an important point. hapi was created to be silly, to change the stiff corporate culture, one silly module name at a time. We take our code more seriously than most.

Looking at the audience at the Monty Python show, gender diversity was very much present. Silly humor doesn’t automatically translate to a boys’ club environment. The burden is clearly on me (us) to make sure that’s the case, but I am not ready to give up on silly.

I think the line between ‘nipple’ and ‘poop’, between offensive and silly, is clear, but this perspective, of course, is open to community debate.

Open Source ain’t Charity

We’re spending real money on open source. Since hapi has been almost exclusively developed by the mobile team at Walmart, we had to justify the significant expense (exceeding $2M) on open source the same way we justify any other expenditure. We had to develop success parameters that enable us to demonstrate the value and to make ongoing investment sustainable.

The formula we constructed produced an adoption menu where the size of the company using our framework translated to “free” engineering resources. For example, every five startups using hapi translated to the value of one full time developer, while every ten large companies translated to one full time senior developer. We measure adoption primarily through engagement on issues, not just logos on the hapijs.com website.

These numbers change a couple of times a year as the nature of contributions evolves, but they provide a solid baseline for progressive comparison. By having a clear way to measure ROI, we can justify more resources. It allows us to clearly show that by paying developers to work on hapi full time, we get back twice that much (or more) in engineering value. The same goes for sponsoring conferences. It all has to translate back to measurable engagement.
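
As a back-of-the-envelope sketch of that menu (the adoption counts below are made up; only the five-to-one and ten-to-one ratios come from the formula described above):

    // Translate adoption counts into the "free" engineering headcount they justify.
    function justifiedHeadcount(startups, largeCompanies) {
        return {
            fullTimeDevelopers: startups / 5,       // five startups ~ one full time developer
            seniorDevelopers: largeCompanies / 10   // ten large companies ~ one senior developer
        };
    }

    console.log(justifiedHeadcount(40, 30));
    // { fullTimeDevelopers: 8, seniorDevelopers: 3 }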

Of course, not everything is just numbers. Since Walmart tends to adopt hapi features about six months after they have been introduced, external early adopters provide a significant quality and stability boost. We are also among the top work destinations for node developers. We have been getting about a dozen qualified candidates for every node opening we advertise. But while these benefits are important, they are very hard to quantify and we rarely rely on them to justify investments.

When we’re asked to sponsor an event we look at the community the event is serving and the impact a sponsorship can have on our adoption benchmarks. Unlike many other companies, we don’t have an evangelism budget. We sell goods, not APIs or services, and our current interaction with the developer community is limited to hiring.

If this all sounds very cold and calculated, it’s because it is. Looking for clear ROI isn’t anti-community but pro-sustainability. It’s easy to get your boss to sponsor a community event or a conference, to print shirts and stickers for your open source project, or to throw a release party for a new framework. What’s hard is to get the same level of investment a year, two years, or three years later.

What is even harder is to justify hiring a full time node contributor and other resources dedicated solely to external efforts. But with a strong, proven foundation of open source investments, even that becomes an obviously smart move – by the numbers.

Open Source Dickishness

Yesterday, to the surprise of the express community, the framework’s creator and longtime maintainer TJ Holowaychuk sold the project to StrongLoop, a commercial node services startup. The move came as a shock to the project’s active maintainers, who have been exclusively responsible for the framework since early January. In a clumsy transfer of ownership, the people actually responsible for the last eight months of the project lost their commit rights (which were later restored).

In a blog post, StrongLoop announced the move as a great next step in the evolution of the project. The blog post masks a commercial transaction as an act of good will by calling it a “transfer of sponsorship”. If all they wanted was to “pitch in and help”, why did they need to take over and move the project? Why is their first public act a blog post and not a pull request?

There is no excuse for violating one of the basic rules of open source – taking a project away from its rightful maintainers. It is also bad form to sell open source maintainer rights (as opposed to trademarks, which is a pretty common, if obnoxious, practice).

The thing about successful open source projects is that their success doesn’t come from the project creator, but from the contributions and adoption of its community. Express’ success has much more to do with the people who chose to use it than the work of one individual, even if he “is responsible for ~95%+ of the project”.

When TJ Holowaychuk lost interest in maintaining Express, he did the right thing (for a change) by letting others take over and keep it going. In open source, that meant the project was no longer his, even if it was located under his GitHub account – a common practice when other people take over a project.

Keeping a project under its original URL is a nice way to maintain continuity and give credit to the original author. It does not give that person the right to take control back without permission, especially not for the sole purpose of selling it to someone else. Not to mention the fact that Express already has a GitHub organization ready and eager to take over the project.

What makes this particular move worse, is the fact that ownership was transferred to a company that directly monetizes Express by selling both professional services and products built on top of it. It gives StrongLoop an unfair advantage over other companies providing node services by putting them in control of a key community asset. It creates a potential conflict of interest between promoting Express vs. their commercial framework LoopBack (which is built on top of Express).

This move only benefits StrongLoop and TJ Holowaychuk, and disadvantages the people who matter the most – the project’s active maintainers and its community.

Update: TJ Holowaychuk posted his account of the events.

The Fallacy of Tiny Modules

There is this myth, that if you break software into many tiny, super focused pieces, life is better. Bullshit.

Remember when object oriented languages were the shit? Breaking a complex system into small discrete pieces, exposing an abstracted interface, and reusing code by keeping everything highly specialized? Wasn’t it fun!

Call it what you like. Microservices, tiny modules, components, whatever. The bottom line is simple – at some point, someone has to put it all together and then all the complexity is going to surface and the shit will hit the fan. Microservices are a nice idea and can be a valid architecture decision in some cases, but let’s not pretend that the people running the overall system are going to like it. Trading a large code-based routing table for a large load balancer configuration isn’t better (actually, it is usually way worse).

Those promoting the idea of a utopia with a million tiny node modules living happily on npm keep using UNIX as their winning argument. The problem is that it is a false comparison. Small focused components making up the UNIX shell environment are only viable because UNIX is part of a curated distribution. Someone did the nasty hard work of picking a baseline functionality for you. More than that, that collection has been defined into a standard so that whatever UNIX flavor you are on, you can feel right at home.

Imagine if UNIX came with nothing but the kernel. No distributions. You install an empty kernel and then use some package manager to pick your copy, move, and list commands. Not much fun anymore, is it? But wait, now go pick your flavor of grep, sed, and awk. Now make sure they all work well together and use the same flavor of regular expressions. Now make sure you keep everything up to date for security and stability.

This is what frameworks provide. In practice, a framework is a curated collection of functionality provided as a distribution. By picking a framework, you are picking a “single purpose node distribution” suitable for your application needs. The framework should allow you to swap the components with others but that baseline is the main value proposition. You buy into an ecosystem and in return you get a usable environment out of the box.

Tiny modules are a useful tool and a valuable design because they allow the construction of highly specific environments. But in most application development environments of any meaningful size, someone has to be the curator. Someone has to pick what’s being used and keep the collection both in sync and up to date, just like every UNIX distribution does.

You can argue that this someone is the developer building the application but that’s just not practical for the same reason you let others pick your operating system toolbox, your hardware toolbox, and even within node, your built-in toolbox. And this is just what’s under your web app. Don’t forget the entire front end side on top of it. Drawing the line at “everything on top of node” is an impractical and unproductive choice.

This is why frameworks matter. Go pick one.

(The author is highly biased as the maintainer of one such framework)

Don’t Be a Bully

I don’t know Brendan Eich. I don’t know much about him other than (1) Mr. Eich created JavaScript and (2) made a contribution to support proposition 8, aimed at taking away the right of gay couples to marry in California and voiding the marriages that had already taken place, including my own.

When the news about Brendan Eich’s contribution became public a couple years ago, the reaction within my community – the web development community – was pretty strong and one-sided, deriding and marginalizing him. Brendan Eich didn’t do himself any favors with a blog post full of grandstanding and lack of empathy. I did agree with his basic argument though.

Being a bully is never a productive strategy. It might be satisfying but it is counterproductive and shows the same lack of empathy that is at the root of the issue on the other side. If you limit your social and business circles to people who only share your exact social ideal, you are actually taking part in sustaining the status quo. There is nothing more powerful than an open dialog.

With his appointment to Mozilla CEO this week, the story got resurrected and many people I love and admire expressed their disagreement with the promotion solely on the basis of his contribution. This is unfair. From all accounts, Mr. Eich has never applied his (assumed) personal beliefs to his work or to others, and pretty much everyone who chimed in has no real first-hand experience with Mr. Eich.

I don’t participate in parades or demonstrations. I am not very active beyond voting and making political contributions. But I am confident that by living my life in the open, by engaging those around me, I am making a positive impact on the lives of other gay people. I constantly invite my friends and coworkers to my house for dinner with their family, exposing them to what is often their first same sex family experience. I seek people and share my personal experiences, explaining why their positions are hurtful.

I strongly disagree with the claim that one can donate or vote for something like proposition 8 and be ignorance-free, hate-free, or bigotry-free. Try walking a mile in the shoes of most gay people, especially during their teen years, in a society that still sucks today in its treatment of gay people, and tell me you still stand behind that claim. But that’s also my point – it is very much my responsibility to share my feelings and experience with those who disagree with me in the hope of seeding this understanding.

It is sad how many people lack true, actionable empathy for people who are still today being beaten, abused, derided, mocked, disowned, dismissed as an abomination, lynched, or executed. It is sad that even those who support gay rights are not sufficiently open-minded about the wide range of gender expression that doesn’t fit their norms.

Instead of posting comments on Twitter aimed at specific individuals, consider sending those you disagree with an email explaining to them in personal tones how their actions hurt you and impact your life. There is nothing more powerful than a personal interaction to change minds. I’m not saying you can change everyone’s mind, or that everyone would be open to engage, but you should at least try.

You don’t get to take the moral high ground unless you actually climb there first.

On Being (Mentally) Well

tl;dr – 1 in 4 people suffer from a mental disorder. If you feel depressed, anxious, or otherwise unhappy for more than a few days, please reach out to a friend, a family member, or a professional. If you feel alone or isolated, know that you are very much in good company with almost 60 million others (in the US alone). You might be surprised to know that many of your friends are (or have been) in therapy or use medication to help with their mental health. Regardless, educate yourself about mental health and make it known to your friends and family that seeking help is a sign of strength and that you are there for them. Please reach out today.

The past few weeks have been a sad and frustrating reminder of the painful toll depression and other mental disorders take. Right before New Year’s, our Bay Area community lost Conor Fahey-Latrope, a talented C++ developer. A few days later, Luke Arduini, a prominent node.js developer, went missing. Sadly, they were not the only ones.

Speakers Creativity Budget

TL;DR – if you are producing a conference, please offer your speakers a ‘creativity budget’ to make their presentations better.

I’ve been a public speaker for a while. I derive great pleasure from speaking to a live audience, big or small. While preparing for and then delivering a talk takes a huge amount of my time and energy, I keep accepting more speaking opportunities because it forces me to push the envelope on my craft. That is, my engineering, creative craft.

I set very high standards for myself (which I usually fall short of, but isn’t that the point?) which include:

  • Talks should be entertaining first, educating second
  • Slides and props are meant to delight and excite, not document or narrate
  • Never repeat a talk (training sessions excluded)

For the same reason I believe most developers should not do design, I contract out the artwork for my presentations. Over the past few years, I’ve enjoyed a fantastic artistic collaboration with Chris Carrasco, who has created all the artwork used in my presentations. I have also learned to rely on props and other costly production elements. These all play a significant role in enhancing my talks.

They also cost money.

Most of my talks this year cost around $500 to produce. Some much more.

My RealtimeFood presentation cost over $5000 (which was paid for jointly by &yet, me, and the 24 participants who sat at the special tables where food was served). My Fuck OAuth talk cost $1200 in artwork and shirts (and it would not have been as good without the shirts – they were absolutely an essential element). The Leek Seed bedtime story at NodeSummit cost $450 to produce (and it will be the main thing anyone remembers from that talk).

Creativity is expensive and I’ve been fortunate enough to have the means to cover these costs out of my own pocket (I rarely ask my employer to cover them since they don’t really benefit from them). You can find a sample of my slides and decks here.

Quality conferences like NodeConf and RealtimeConf have long offered to cover speakers’ travel costs. They are produced by people who care deeply about quality and they recognize that top speaking talent demands top treatment. Conferences are a business, after all. But I think we need to go one step further.

I’d like to propose a new speaker benefit: a creativity budget.

This is pretty simple. Each conference will make available a budget to reimburse speakers for costs such as artwork, props, hardware, or other materials that will enhance and elevate their presentations. For most conferences, I would set this at $300-500.

This will work similarly to how travel is covered today, by reimbursing speakers for submitted invoices or by the event producer paying the costs directly. I would also encourage organizers to promote and push speakers to spend the money. Almost every presentation can benefit from higher production value, and the conference as a whole will be elevated. There is a reason so many people attend conferences these days only to stare at their laptops all day.

As for how to fund it, there are many creative ways. Asking for talk sponsorship, selling premium experiences, asking those with means to crowdfund it, or simply charging a bit more for tickets in exchange for a better conference experience. We’ve seen conferences with incredible production values over the last couple of years, but we have not seen any noticeable improvement in the quality of the talks. Let’s fix it.