— Conferences, JavaScript — 18 min read
As promised in the post on CSSconf EU 2018, I will now talk a bit about JSConf EU 2018, which followed CSSconf in the Berlin Arena (June 2nd and 3rd).
JSConf venue
As you're probably expecting, the talks weren't exclusively technical (of course we got to see a few hands-on kinds of presentations). Topics ranged from ethics, productivity and history all the way to the more technical realm, with performance, user experience, machine learning, IoT and of course the Javascript language itself. Check out the conference full schedule.
Now I'll focus on a few interesting talks that I had the chance to see on each day, giving an overview of the content that was most meaningful (in my opinion, of course). I'll break down the talks into Day 1 and Day 2.
The opening talk was about errors (1), yes, errors. I've highlighted this topic because error handling is often forgotten or skipped, but shouldn't it be part of the modelling process and architecture of our applications? Well, that's another story. While questioning type systems and discussing errors from a human perspective, a pretty valid point was made about Javascript's error handling mechanism: it has been practically the same since it came out in ES3. Of course the language evolved in ways that try to mitigate predictable runtime errors with static analysis; ES5 brought along strict mode, which basically (talk quote) "disallows bad code that otherwise would be allowed according to the language grammar". Next, a list of how errors typically are, and have been, handled was shown, from the classic just crash to exceptions, signals, options/maybes... Some conclusions here were:
Other ideas that the speaker brought to the discussion involved recovery blocks and algebraic effects.
Next, Daniel Ehrenberg talks to us about numbers (2). The problem is that integer representation in Javascript is limited to 2^53, as in:
```javascript
const x = 2 ** 53;
// x is 9007199254740992
const y = x + 1;
// y is again 9007199254740992
```
According to this presentation, long/ulong types were proposed back in 1999 in some of the first ECMA specifications, but they didn't make it in the end. So why this limitation in numerical representation in Javascript? Numbers in Javascript are 64-bit floating point binary numbers (IEEE 754 64-bit floats), and their structure is:
IEEE 754 64-bit floats
So the 2^53 maximum number (9007199254740992) looks like this in binary floating point representation:
0 10000110100 0...(x52)
Adding 1 we get:
0 10000110100 0...(x51)... 1
that is 9007199254740994. So... OK, we get it: with a 64-bit representation it's impossible to represent all numbers, since you only have a 64-bit combination pattern to form them; at some point we need to round, go up to infinity or throw an error.
But wait, is this a real use case? Yes. Check out the Twitter IDs (snowflake) issue that made them add an id_string field, so that when Javascript parses the id it keeps an unchanged copy in string format.
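To see the precision loss for yourself, here's a quick sketch (the id value is illustrative of a snowflake-sized integer, not a real tweet):

```javascript
// Parsing a 64-bit id as a JS number silently rounds it:
const raw = '{"id": 10765432100123456789, "id_str": "10765432100123456789"}';
const tweet = JSON.parse(raw);

console.log(tweet.id_str); // "10765432100123456789" — exact
console.log(String(tweet.id) === tweet.id_str); // false — id was rounded on parse
```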
The proposed solution is BigInt. Not some library such as bn.js, but a native Javascript BigInt type.
```javascript
x = 2n ** 53n;
// x is 9007199254740992n
y = x + 1n;
// y is now 9007199254740993n
// note that if you try and add a number you will get a TypeError
y = x + 1;
// Uncaught TypeError: Cannot mix BigInt and other types, use explicit conversions
```
You can check the progress of the proposal at the github repository tc39/proposal-bigint, it is at the time of this writing in stage 3.
In my opinion it may solve huge corner cases such as the one exposed previously; still, I think it could make arithmetic operations more error prone, since we can now run into TypeErrors for mixing numbers with BigInts. If BigInt is not explicit enough for developers we may start to fall into messy errors, but maybe I'm overreacting here.
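A quick sketch of the explicit conversions that the TypeError message asks for:

```javascript
// Explicit conversions avoid the TypeError:
const big = 2n ** 53n;

const sum = big + BigInt(1); // convert the Number side to BigInt
console.log(sum); // 9007199254740993n

const asNumber = Number(big); // convert the BigInt side (may lose precision!)
console.log(asNumber); // 9007199254740992
```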
(3) You most certainly have seen Philip Roberts' talk on the event loop, What the heck is the event loop anyway? | JSConf EU. If not, please stop reading this and watch that instead, it's far more important. In this next talk we dive deeper, to learn that the event loop is a bit more complex than what you saw in Philip Roberts' talk.
To start we can think of the event loop as the main function of the browser, something like:
```javascript
while (queue.waitForMessage()) {
  queue.processNextMessage();
}
```
A quick look into how task queues work in web browsers. First, tasks are small units of work to be executed from start to finish. The rendering pipeline is responsible for painting things in the browser. This pipeline can run when a task finishes, but it has a separate timing cycle, so some waiting between the time a task finishes and the time the render pipeline runs again is sometimes inevitable. Also, if you have a task that takes a really long time to run, the rendering pipeline has to wait, and at this point your page will potentially start to slow down.
Then there are these things called micro-tasks (promises' callbacks are handled as micro-tasks). Micro-tasks are handled differently than regular tasks: they are queued in a micro-task queue, this queue runs after each task, and while the queue is emptying, other micro-tasks might be added and executed in the same event loop tick.
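A quick sketch you can run to see this ordering (the final setTimeout just prints the result after everything has settled):

```javascript
const log = [];

setTimeout(() => log.push("task"), 0); // goes on the regular task queue
Promise.resolve()
  .then(() => log.push("microtask 1"))
  .then(() => log.push("microtask 2")); // queued while draining, still runs before the task
log.push("sync");

setTimeout(() => console.log(log), 10);
// ["sync", "microtask 1", "microtask 2", "task"]
```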
There's more. Animations have a dedicated queue as well, the animation frame callback queue. When the animation tasks end, the event loop proceeds to the repaint, meaning that we don't wait for new animation tasks that might appear, and it makes sense: if that happens, it's because some animation was requested to be displayed in the next frame (thus in the next repaint).
At the end of this series of explanations we got the following pseudo-code:
```javascript
while (true) {
  queue = getNextQueue();
  task = queue.pop();

  execute(task);

  while (microtaskQueue.hasTasks()) {
    doMicroTask();
  }

  if (isRepaintTime()) {
    animationTasks = animationQueue.copyTasks();

    for (task in animationTasks) {
      doAnimationTask(task);
    }

    repaint();
  }
}
```
And that should be it. Now a quick peek into Node.js. Node should be simpler since:
A few interesting things:

- `setImmediate(callback)` is the same as `setTimeout(callback, 0)`, but it runs first!
- `process.nextTick(callback)`: all these callbacks will run before the promises' callbacks.
- Despite its name, `setImmediate(callback)` does something on the next tick.
- Despite its name, `process.nextTick(callback)` does something immediately.

Below is the pseudo-code for the Node event loop:
```javascript
while (tasksAreWaiting()) {
  queue = getNextQueue();

  while (queue.hasTasks()) {
    task = queue.pop();

    execute(task);

    while (nextTickQueue.hasTasks()) {
      doNextTickTask();
    }

    while (promiseQueue.hasTasks()) {
      doPromiseTask();
    }
  }
}
```
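A small sketch of these rules in action. Note that the relative order of `setTimeout(..., 0)` and `setImmediate` from the main module is actually nondeterministic in Node, so the output comment only pins down the start:

```javascript
const order = [];

setTimeout(() => order.push("setTimeout"), 0);
setImmediate(() => order.push("setImmediate"));
Promise.resolve().then(() => order.push("promise"));
process.nextTick(() => order.push("nextTick"));

setTimeout(() => console.log(order), 10);
// starts with ["nextTick", "promise", ...]
```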
Regarding web workers, the only relevant fact pointed out is that they are simple to understand, since each web worker runs its own event loop on a separate thread and they are not allowed to manipulate the DOM, so no need to worry about user interactions here.
If you are interested in more of this, check this very complete post: Tasks, microtasks, queues and schedules; it contains interesting animated demonstrations.
(4) More than a year has passed since the release of WebAssembly (WASM), and it is still taking its first steps towards what could be a game changer in web development (some say). The next talk I'll mention introduced WASM itself. So first of all, WASM:
```wat
(module
  (func $square
    (export "square")
    (param $x i32)
    (result i32)
    (return
      (i32.mul (get_local $x) (get_local $x))
    )
  )
)
```
That's enough WAT for now; you're probably wondering how we can use WASM modules within Javascript. It's actually simple, you just need a small amount of boilerplate to load the WASM module. Let's import and use a square.wasm module.
```javascript
const importObject = {}; // no imports needed for this module

fetch("square.wasm")
  .then((response) => response.arrayBuffer())
  .then((bytes) => WebAssembly.instantiate(bytes, importObject))
  .then((results) => {
    const square = results.instance.exports.square;
    const x = square(2);

    console.log(x); // 4
  });
```
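If you don't have a square.wasm file at hand, you can try the same loading flow with a tiny module hand-assembled as raw bytes. This is the canonical minimal "add" example of the WASM binary format (a sketch; the byte-layout comments are mine):

```javascript
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00, // function section: one func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // local.get 0, local.get 1, i32.add, end
]);

const instPromise = WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // 5
  return instance;
});
```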
Of course you probably won't be writing many WAT modules by hand; you will compile your C, C++, Rust or whatever into WASM and use it the same way we did above.
I think a very strong point was left a little too implicit during this talk: performance was mentioned, but with WASM we will be able to obtain more robust and consistent performance across platforms/browsers, and portability across operating systems and different CPU architectures.
As you can see, WASM sits between our source code and the various CPU architectures, creating an abstraction layer on top of them. This diagram is from the talk 'Dan Callahan: Practical WebAssembly | JSConf Budapest 2017'.
If you want to look more into the benefits or what actually could be achieved with WASM I highly recommend the two following talks Real World WebAssembly (Chrome Dev Summit 2017) and Dan Callahan: Practical WebAssembly | JSConf Budapest 2017.
I couldn't have imagined a better talk to end day 1 (5). On stage we had Ryan Dahl, inventor of Node.js. First Ryan gives us a bit of context on the talk: how he wanted to build better servers with event-driven non-blocking I/O, and why dynamic languages are great (for certain kinds of tasks), Javascript being the best dynamic language.
The talk had the following introduction:
"(...) using Node now looks like nails on chalkboard to me, I see the bugs that I introduced, I mean at this point they are not really bugs it's just how it works, but they are bugs. They were design mistakes made that just cannot be corrected now because there's so much software that uses it (...) It offends my sensibilities (...)"
So let's take a look at the mentioned regrets:

- `require()`: its semantics in Node look into `package.json` and search through files; this made `package.json` necessary for Node applications, so we ended up with a centralized repository for modules. Ultimately NPM was included in the Node distribution.
- `node_modules` folders... It gets big.
- Omitting file extensions: `.js`? `.jsx`? `.ts`? Well, on this one I agree with Ryan, just write down the f*** extension!
- `index.js`: servers have an `index.html`, and in the same wave of thinking it seemed cute to have an `index.js`, why not? Well, it turns out that this raised the complexity of the module loading system unnecessarily.

And then a plot twist. After all the complaining about Node.js, Ryan presented a possible alternative to Node.js and how it could be better.
The alternative? deno, a secure TypeScript runtime on V8. The main goals of deno are security, simplicity of the module system, and supporting TypeScript out of the box.
(1) The first talk we'll see on day two isn't strictly related to JS; it covers the broader theme of performance and how we can improve resource loading. Let's get to know a few of these techniques and see why HTTP/2 alone will not solve all your performance problems.
HTTP/2 will solve this.
HTTP/2 will certainly bring speed to the web, but the thing is that resource loading in the browser is hard. Performance is tightly coupled to latency: the speed of light will not get any faster, and TCP handshakes will not go away, nor will the TCP congestion mechanisms that penalize us at the beginning of every connection. The way browsers and servers interact does not allow us to get the best performance out of resource loading.
What are critical resources?
A critical request is one that contains an asset that is essential to the content within the users' viewport.
A good loading strategy:
So let's look into some techniques...
What if we could tell the browser upfront which resources we want to load? Resources such as fonts are known as critical hidden sub-resources, since we only learn of their existence after the browser executes a few steps:
What if we could:
Provide a declarative fetch primitive that initializes an early fetch and separates fetching from resource execution.
So you can, preload with HTTP header:
```
Link: <some-font.woff>; rel=preload; as=font; crossorigin
```
Or with markup:
```html
<link rel="preload" href="/styles.css" as="style" />
```
Shopify switched to preloading fonts and saw a 50% (1.2 s) improvement in time-to-text-paint.
Imagine the following scenario:
The connection will be left idle for a while (red area) while the server is thinking; this will especially be the case if you have a server-side rendered application.
But what if the server could predict that the next asset required by the browser will be the main.css file?
Imagine the server could send this as soon as it receives the first request:
```
Link: <font.woff2>; rel=preload; as=font; crossorigin # indicate push via preload Link header
Link: <main.css>; rel=preload; as=style; nopush # use nopush to disable push semantics and only use preload
Link: <application.js>; rel=preload; as=script; x-http2-push-only # disable preload semantics with x-http2-push-only
```
If your server is HTTP/2 enabled, it will read the above configs and initiate the pushes for you. Benefits? In Europe, saving this one round trip on a 3G connection could save us as much as 800ms.
But notice we still have idle time at the beginning of the connection for index.html. This idle time happens because the push is only initialized once we are serving the index.html response.
The goal here is to decouple the pushing behavior from our application HTML response starting the push right at the beginning of the connection flow.
Notice that we don't need to wait for the server to finish rendering the index.html; simply by knowing that index.html was requested, we can start to push resources! Below is a simple snippet on how to achieve this with Node.js.
```javascript
const http2 = require("http2");
const fs = require("fs");

function handler(request, response) {
  if (request.url === "index.html") {
    const push = response.push("/critical.css");
    push.writeHead(200);
    fs.createReadStream("/critical.css").pipe(push);
  }

  /**
   * Generate index response:
   * - Fetch data from DB
   * - Render some template
   * etc.
   */

  response.end(data);
}

const server = http2.createServer(opts, handler);
server.listen(80);
```
Again, benefits?
Cache. The browser has the ability to say "Please don't send me that main.css, I already have it in my cache.", but with HTTP/2 push, by the time the browser is saying this we have already sent the main.css, so we have a race condition here!
Yet another blog post by Jake Archibald is mentioned in this talk; if you want to go into production with HTTP/2 push, check out HTTP/2 push is tougher than I thought.
Also, the lack of adoption of push has made some vendors consider removing push from the HTTP/2 specification.
Fix the race condition between push and the browser cache. The browser sends information about what content it has in cache, so that the server can decide what to push! You can check the CACHE_DIGEST specification here.
All this seems very complex, and we would need to maintain state, but HTTP is special and known for being stateless. The proposal of the new status code 103 Early Hints is kind of an inversion of control compared to CACHE_DIGEST: this time the server tells the browser upfront which resources are available for download. This is a small reply containing only headers, and it happens, of course, before or during the server think time.
```
HTTP/1.1 103 Early Hints
Link: </main.css>; rel=preload; as=style
Link: </main.js>; rel=preload; as=script
Link: </application-data.json>; rel=preload; as=fetch
```
In the speaker's opinion this is a lot more powerful than push, since we're moving the decision process back to the browser.
Priority hints allow us to decorate our HTML so that we can prioritize resources explicitly:
```html
<img src="some-image.jpg" importance="low">
<img src="very-important-image.jpg" importance="high">
```
(2) A very enlightening moment was the conversation and Q&A session with the TC39 panel. Not only did I get an insider perspective on how things work among the people behind the ECMAScript specification, I also got to hear about some of the exciting new features coming to Javascript. I'll leave below a summarized transcription of the most relevant topics discussed during the Q&A session.
TC39 is a committee of delegates who represent the member companies of ECMA International. They get together every 2 months, for 3 days, to discuss which proposals move forward and what changes were made. They operate on consensus, which means that everyone has to agree for something to move forward, which is pretty unique among programming standards. Then there's the proposal process, which works like this:
Other languages use a `private` keyword for private members. How did we end up with `#` for private methods and property access?

There are two different things here: private declaration and private access. Since Javascript is not statically typed, you cannot tell at runtime whether a given property is private or public. So we ended up with `#` to declare and access private properties; you can think of the `#` as part of the name of the property, like for instance:
```javascript
class SomeClass {
  #prop; // private fields must be declared in the class body

  constructor(prop) {
    this.#prop = prop;
  }

  getProp() {
    return this.#prop;
  }
}
```
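To see the privacy in action, a quick sketch (`Counter` is a made-up example class):

```javascript
class Counter {
  #count = 0; // declared in the class body, invisible outside it

  increment() {
    return ++this.#count;
  }
}

const c = new Counter();
console.log(c.increment());  // 1
console.log(Object.keys(c)); // [] — #count is not an ordinary property
// c.#count here would be a SyntaxError: #count can only be touched inside Counter
```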
This is a class feature that is part of a series of related, separate proposals (which could possibly merge into each other in the future, or even break into more specific ones):
The idea is to get the best of both worlds. Other languages, such as Rust or Swift, are largely influenced by both the object oriented and functional paradigms.
Javascript and WASM are complementary as compile targets, so for features that don't make sense in Javascript, WASM can actually be the home where they make more sense.
There was a proposal to add `flatten` and `flatMap` to the Array prototype. It was implemented and shipped by Mozilla, but they soon realized that it was breaking certain web pages. Basically, some web pages were relying on those methods not being there; this is of course the worst that can happen to a proposal, we don't want to break the web. So it was rolled back, and we knew we needed to change the proposal in some way. In this case, because the name itself (`flatten`) was the problem, we had to rename it somehow... The proposal author decided to send a joke pull request renaming it to `smoosh` and `smooshMap`, but it was not clear that this was a joke, so... everybody freaked out.
Good idea, but very complex. 😎
Some of them take years, but a year to 18 months is a more realistic estimation.
(3) A vital part of any Javascript runtime is its engine. V8 is the Javascript engine in Chrome, Electron and Node.js. In the next talk we look into the fundamental parts that are common to V8 and all the other major Javascript engines:
(As an aside, if you want to run Javascript directly in several engines you can install jsvu.)
All engines share a similar base architecture.
Regarding the important part (the yellow square in the middle, with the interpreter and optimizing compiler), below are the main differences pointed out for each Javascript engine:
So we can already see that the base architectural components of a Javascript engine are: parser, interpreter and compiler pipeline.
Now, the most interesting part is around Objects and how they are represented within engines. Objects are basically dictionaries, as in the following image.
So an object has string keys that map to the value and metainformation of each property, the property attributes, according to the ECMAScript language specification. These property attributes are:

- `[[Value]]`: the value returned when the property is read.
- `[[Writable]]`: whether the property can be assigned a new value.
- `[[Enumerable]]`: whether the property shows up in `for in` loops.
- `[[Configurable]]`: whether the property can be deleted or have its attributes changed.

You can access them in Javascript with:
```javascript
Object.getOwnPropertyDescriptors(someObject);
```
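A quick sketch of what those attributes look like, and how [[Enumerable]] affects `for in` (`point` is a made-up object):

```javascript
const point = { x: 1 };
Object.defineProperty(point, "y", { value: 2, enumerable: false });

console.log(Object.getOwnPropertyDescriptor(point, "x"));
// { value: 1, writable: true, enumerable: true, configurable: true }

for (const key in point) {
  console.log(key); // only "x" — y is hidden from enumeration
}
```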
Another interesting fact about Objects is that they store their metainformation in a separate data structure, so that the actual object only contains the values and a pointer to that data structure. The data structure containing all the metainformation is called a Shape (in SpiderMonkey; other engines use other names, but they are misleading, the computer science term for this is hidden class).
Now let's check how object declaration and property access are optimized in engines. Basically, they build a doubly linked, tree-like structure that defines all possible shapes, and each newly added property only stores metainformation regarding itself. The `Offset` just tells you where you will find the property within the JS object.
But! This isn't always the case. It turns out that when you already have a shape that derives from a base object, but then initialize some object in a different way (e.g. with some properties already set), the engine will create a new shape, as it is more efficient for engines to keep the shape structures as small as possible. As you can see in the next picture, a new shape is created despite property `x` already being in the first shape chain.
The main motivation for engines to have shapes is the inline cache (IC). This mechanism stores information about where to find properties within an object, so that property lookup can be optimized. Basically, for a given retrieved property it stores the offset at which the property was found inside the shape; that way you can skip fetching the property metainformation to get the offset, you just access it right away!
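You can't observe shapes directly from Javascript, but the practical takeaway can be sketched like this (made-up objects):

```javascript
// Same properties, same order → objects share a shape, so the property
// access in getX stays monomorphic and IC-friendly:
const a = { x: 1, y: 2 };
const b = { x: 3, y: 4 };

// Different initialization order → a different shape chain:
const c = { y: 5, x: 6 };

function getX(point) {
  return point.x;
}

console.log(getX(a) + getX(b) + getX(c)); // 10
```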
At the end two important notes:
(Note: I skipped arrays in the above talk as they are handled in similar ways and with similar mechanisms compared to objects.)
On this second day I would also like to mention two more talks: A Web Without Servers (4), because Beaker, a peer-to-peer browser with a decentralized web in mind, was presented there; and Deep Learning in JS (5), because machine learning is a pretty hot topic and we should highlight the fact that TensorFlow.js is already available, which makes Javascript even broader in terms of its applications.
"Whenever you are designing a program, there are things that you think it might be cute to add in... You always regret those. If they are unnecessary and simply cute, don't do them!" (Ryan Dahl)
```javascript
function justThrowAnErr() {
  throw new Error("some error message");
}

function fetchSomeRainbows(nRainbows) {
  if (!nRainbows) {
    return Promise.reject(justThrowAnErr());
  }
  return Promise.resolve("here u go");
}

fetchSomeRainbows().catch((rainbowError) => console.log("unable to fetch rainbows"));

// this will output: "Error: some error message"
// instead of "unable to fetch rainbows"
```
- `<main>`: you will want to set a `role="main"` attribute on it if you want to support IE11.
- Use `role="banner"` on your main header, not to be confused with other headers that you might have.

So, the event took place at the Berlin Arena, an old building/factory kind of hipster place, very cool, and very hot at that time of the year! Good thing water was free of charge and could be reached within a few steps from almost any place inside the arena. So yeah, besides the cool space and stage setup, there were always free soft drinks (and frozen yogurt!) around to make sure no one got dehydrated. Talking about the stage: aside from the unbearable heat, the space was pretty cozy, with round tables so that people could keep laptops and other stuff on the table while attending the talks. Also worth mentioning, there were a few electronic music live performances by nested_loops and live:js; the sound and visual effects played together nicely, producing a great show.
Throughout the venue there were exhibition stands from the sponsor companies; besides the goodies, you could see product demonstrations, talk to people about the company or even get a job interview.
Breakfast, lunch and dinner were included in the ticket, so we (me and my colleagues) agreed that it would be worth a try, and so we ventured into the vegan world (practically unknown to me at the time). The food was nice, I mean, I had the opportunity to try a few dishes such as vegan hamburger, vegan gnocchi, vegan pasta, you get it...
Gnocchi Pesto
Vegan Risotto
At the end of the day, we just grabbed a beer near the river and enjoyed the remaining sunshine (no, we could not use the pool).
You know Berlin, right? The capital and largest city of Germany, and one of the largest and most multicultural cities in Europe. Despite the conference's busy schedule, I took some time (mainly my last day) to visit a few of the highly rated places in Berlin, such as Museum Island, the Berlin Cathedral, the Brandenburg Gate, Checkpoint Charlie and a few other spots. It is really worth visiting: besides the obvious places you would want to check, like the ones mentioned previously, Berlin has a great urban structure, with organized, large street blocks composed of beautiful buildings, huge and varied green spaces and, of course, an endless nightlife catering to all tastes.
And this is me (left) and my colleagues on our last day in Berlin.
See you soon.
If you liked this article, consider sharing (tweeting) it to your followers.