Bad Design and the Evolution of the Web

In my previous post about Web Design Irks, one of the points I discussed was that the fixation on “streamlining” causes usability problems. In particular, chasing an experience that feels streamlined punishes slower connections and makes it impossible to link to a specific state of the page.

What I may have only mentioned in passing is how the fixation on JavaScript in general is an issue. Heavy use of JavaScript to control a page has resulted in web pages – or worse yet, websites billing themselves as “apps” – that assume the user will never touch the browser’s interface and will always have a connection fast enough to interact only through the links the design provides.

In case it wasn’t obvious, I hate JavaScript.

So today Google took it upon themselves to make a particularly dangerous design decision – they removed the URL. Specifically, the address bar is still a field where you can enter a URL, but it no longer retains it; the address collapses back into a button to the side representing the root of the website you are viewing.

As the link above supposes, it seems like a design choice meant to improve security and avoid scaring off the technologically inept. I am particularly iffy about the latter point: simplifying the UI and hiding away important information doesn’t make things less scary for the inexperienced; it just makes things a lot harder for those who have learnt to deal with technology and user interfaces.

The link then goes on to mention the thing that I loathe: single-page apps powered by JavaScript. It also links the video seen here. The video has a bit of a lengthy introduction — as intriguing as the history of phone numbers and the interaction of server and client with web pages is — but the thing I want to draw attention to comes further in, starting around the 15-minute mark, and is summarised rather well in the description: JavaScript developers treat the URL as an afterthought.
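To make the “afterthought” point concrete, here is a minimal sketch (plain JavaScript; the loadPost() function, element IDs, and /posts/ paths are placeholders of my own invention, not anything from the sites I complain about below) of one pattern a script-driven page could use to keep the address bar honest: push a real URL whenever content is swapped in, and restore the content from that URL when the user navigates back or forward.

```js
// Sketch only: loadPost(), the element IDs and the /posts/ paths are hypothetical.
// The point is that every client-side content change also records a real URL.

async function loadPost(id) {
  // Fetch the post's HTML fragment from a URL that would also work on its own.
  const response = await fetch('/posts/' + id);
  document.querySelector('#content').innerHTML = await response.text();
}

// Intercept internal links instead of letting the browser do a full page load.
document.addEventListener('click', function (event) {
  const link = event.target.closest('a[data-post-id]');
  if (!link) return;
  event.preventDefault();

  const id = link.dataset.postId;
  loadPost(id);
  // Record the change in the address bar, so the URL still names the content.
  history.pushState({ postId: id }, '', '/posts/' + id);
});

// Back/forward buttons fire 'popstate'; rebuild the page from the recorded state.
window.addEventListener('popstate', function (event) {
  if (event.state && event.state.postId) {
    loadPost(event.state.postId);
  }
});
```

The specifics don’t matter; what matters is that /posts/123 opened fresh would show the same thing the script just rendered.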

Now, the issue I’m trying to bring to light is that it’s becoming increasingly difficult to be on a page that’s true to its URL. As the video describes, pages that vary based on certain conditions go back a long way, and the practice isn’t limited to JavaScript — PHP is what generates these blog pages. I don’t have an issue with the concept of pages changing themselves or having fluid content; I don’t want to return to the early web in its purest form.

My issue is when this is taken too far. I know that if I link to a certain post on my blog, all users will see essentially the same page; if a user is on a mobile device, the layout might be different. However, not all web pages are like this. As I’ve mentioned issues with Tumblr before, I may as well make an example of it again: I can’t link to the page for making a new post on Tumblr the way I can for WordPress, because the post form is generated within the dashboard. For some Tumblr themes, I cannot even view the index content unless I have unblocked certain JavaScript, because the page generates its main content client-side. This is atrocious design.

The reason I bring this all up again is that the links above about the URL articulate why I take issue with things like Tumblr generating the post form, or certain blog layouts not even loading content without JavaScript: those pages’ initial states should be tied to their URL. The URL is the single source of truth about what’s on a page at any given time. That’s good phrasing. When I am at the URL defined as Tumblr’s dashboard, I would expect to see only Tumblr’s dashboard, not the posting interface as well. If I wanted that, I’d go to the new-post URL... well, if it had one.
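Honouring that principle on the client side doesn’t take much, either. Here’s a rough sketch (again plain JavaScript; the paths and render functions are made up for illustration, and are not Tumblr’s actual routes) of reading the URL on load and rendering exactly what it names, nothing more:

```js
// Sketch only: the paths and render functions below are invented for illustration.
// The idea: the page's initial state is derived entirely from the location, so a
// dashboard URL shows the dashboard and a new-post URL shows the post form.

function renderDashboard() { /* ... */ }
function renderNewPostForm() { /* ... */ }
function renderNotFound() { /* ... */ }

const routes = {
  '/dashboard': renderDashboard,
  '/new-post': renderNewPostForm,
};

function renderFromUrl() {
  const render = routes[window.location.pathname] || renderNotFound;
  render();
}

// Render from the URL on first load and whenever history navigation changes it.
window.addEventListener('DOMContentLoaded', renderFromUrl);
window.addEventListener('popstate', renderFromUrl);
```

If every state worth linking to gets its own entry in a table like that, then every state worth linking to has a URL.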

Deviantart might give a better example of the above concern: if you view a piece of art and then move to another through “More Like This”, the content and the address bar are changed through JavaScript – and sometimes the address doesn’t even change accurately, because the artist’s domain may be incorrect. (I’m sure that changing the address bar through JavaScript is itself a security risk that can be used to trick a user into thinking they’re on a particular page.)

This inaccuracy follows through when someone links me artwork they’ve reached through link hopping: I have to wait as their original location loads, and then Deviantart’s scripts clear it and load the content I actually asked for in the address. To apply this to bad Tumblr design: if someone links me to a post within a JavaScript-heavy theme, instead of a straightforward post I have to wait for the page to generate itself in front of me (even though a page has already been generated server-side).

Now assume that I have JavaScript disabled. The initial state of the page reveals the URL’s actual nature: for Deviantart it’s a different artist’s gallery index; for the bad Tumblr theme it’s a blank page with a spinning GIF. The URL is the single source of truth about what’s on a page at any given time? What I’m seeing is not what someone wanted to link to me, because the JavaScript programmers decided that URLs don’t matter.

So let’s look back at the fact that Chrome could do away with the URL as a fixture of the browser UI. Firefox is almost sure to follow in Chrome’s footsteps, along with other browsers. The URL could become subconsciously forgotten by users and ignored ever more fervently by programmers, and once we are always sharing links via share buttons, we will have lost the ability to use the URL to specify what we want to see, share, or refer to. There is no URL that is purely Tumblr’s posting interface; it is counter-intuitive to get a URL that is purely a specific artwork post on Deviantart; and don’t even mention infinite-scrolling designs...

Some people already know about the concept of the filter bubble. What happens if we apply that here? It’s already difficult enough to refer to the content we expect even when the conditions are right on both the sending and receiving ends of a link (i.e. JavaScript is enabled); beyond that, there is the more insidious concern of hidden filter bubbles. While at this point in time — to my knowledge — these only extend as far as search-engine results and maybe some news sites, the use of user history and metadata could begin to affect much more than search results or advertising in the future.

The internet is evolving, but we need to pay heed to the spread of fads and experimental design choices that appeal to the ignorant masses. I’m not saying we’re not allowed to make mistakes; I’m saying we need to realise when we’re making one. Heavy use of client-side generation, disregard for what URLs actually mean, and the use of filter bubbles under the pretence of user benefit could all mean that sharing information in the future won’t be as simple as sharing a link, because we will have to follow the tools provided by designers (or worse yet, copyright holders) to set up the conditions that will generate the same result for whoever uses our link.

Just imagine a future where all cars are self-driving, with no option to drive the car yourself. Every route to every location follows specific guides no matter where you are, because the self-driving system has to get itself to certain hub points and then refresh its co-ordinates towards your destination. That’s the kind of future I see for a web that keeps going along with the strange, awkward design decisions so prevalent now.

Thursday, 1st May 02014
