The GitHub experience

On why I moved from Kirby and chose to build a static site with Hugo

GitHub is one of my favourite websites of all time; Dropbox is perhaps its only competitor. Neither would rank high in terms of design, but then again the core purpose of both websites is functionality and, in that regard, they are both at the top of their game. And that is why, as of today, both this website and Physics Capsule run on a combination of GitHub, Dropbox and Netlify.

Although I have been using GitHub for quite some time now, I had never quite dipped into the console side of Git. Understandably, most would find this laughable; but for a long time I found great use for GitHub purely as a browser-based code storage and release service that made it easy to track my code and gave it a safe home on the web—albeit a non-negotiable, extremely public home, at least until recently.

Having once had a taste of deployment from our personal machines, none of us wanted to go back to a browser-based website with a clunky dashboard.

Necessity breeds the acquisition of knowledge. Physics Capsule, one of my favourite side projects, fell by the wayside last year thanks in large part to my wedding and my co-founder’s falling just months apart. This is perfectly understandable and neither of us complained. There were also some software woes despite our having recently switched hosts and rewritten a lot of the code from the ground up. I saw this as an excellent opportunity to do something radical that would not only shape Physics Capsule but would also help us maintain it with greater ease and ensure the stability of the project in the long run.

Hugo and Google Drive

Having given static sites a lot of thought and having carefully watched their development over the years, I decided now was a good time to take the plunge. Some of my readers will recall that I moved away from Wordpress to a flat-file CMS called Kirby about two years ago. I have nothing against Kirby except that its v3 license became a bit too costly for me to justify the expenditure, and the host of changes going from v2 to v3 put me off. This is not Kirby’s fault—not entirely anyway—but I took this opportunity to dip my toes into Jekyll and Hugo. I settled on Hugo eventually because of its speed and because I found its ease of use to be on par with Jekyll’s.

I could find no way of hosting Hugo with my own host without a complex set-up that involved an FTP program. Especially since multiple people write on Physics Capsule, having a version control system was of paramount importance. At the very least we had to have some way of ensuring we were not undoing one another’s changes.

Enter GitHub

I turned to trusty old GitHub again, realising that, having once had a taste of deployment from our personal machines, none of us wanted to go back to a browser-based website with a clunky dashboard. Indeed, I wanted to stay off the browser entirely myself, which meant my usual GitHub habits had to be pushed aside and the console had to come into play.

Installing Git and generating SSH keys

Installing Git was easy. The official website offers a handy package file to install Git on any operating system. Generating SSH keys was a bit more complicated. Below I reproduce a record for posterity (and we all know precisely how long that is in technology years). First, fire up Terminal.app and check whether you have a .ssh directory in your home folder with the following command:

cd ~/.ssh

If such a directory exists it should change to that directory and all should be well. If not, Terminal will tell you that no such file or directory exists so you will have to create it yourself. Do that with

mkdir ~/.ssh

then press return and follow it up with

chmod 700 ~/.ssh

And that should have you set up and ready to go. Next, we need to create your RSA key pair. This is a randomised alphanumeric snippet that will prove to GitHub that you are in fact who you say you are. Naturally, it will be associated with whatever e-mail address is attached to your GitHub account. First cd ~/.ssh and then make sure to substitute your own e-mail address when you run the command below:

ssh-keygen -t rsa -C "your-github-email@something.com"

If Terminal asks you which file you want to save the key in, just hit Return and it will use the sensible defaults. It will also ask for a passphrase, which you can set if you like for added security. Next, start the SSH agent by running

eval "$(ssh-agent -s)"

which prints an agent process ID to confirm that the agent is up and your new key is ready to use. Also, as a friendly little tip, if you set a passphrase earlier, running

ssh-add -K ~/.ssh/id_rsa

will save your passphrase to your macOS keychain for safekeeping and later reference.

So much for your local machine. For GitHub to treat this machine as a trusted source of code pushes associated with you, it needs to know the public half of this key. The trouble with an SSH key is that it is quite sensitive: an extra space or line break (or the lack of one) will simply mean a different key altogether. The safest way to go about letting GitHub know your public key is to copy it via Terminal:

pbcopy < ~/.ssh/id_rsa.pub

Now head to GitHub, log into your account, click on your profile picture (top-right corner), head to Settings → SSH and GPG keys → New SSH key and paste the copied key with the usual ⌘V shortcut. Save it with your device name, or something you can quickly identify as your device, and you are ready to Git.

General Git workflow

With Hugo, Node.js and Git installed, and with our GitHub SSH keys in place, we were ready to deploy Physics Capsule. The public files had already been built, and the site design had been finalised (at least for a version-one deploy). We set up an organisation account on GitHub, with physicscapsule.com as a repository, having decided to go fully open source. The final step involved cloning the repo onto our local machines and we were all set, both on GitHub and on our Macs.

Next we had to set up edge serving with Netlify. Getting that set up was easy. I simply updated the necessary DNS records with our domain registrar to point to Netlify and set up a custom domain that went live (i.e. the records propagated) within a day. Netlify also took care of our SSL certificates for us.

With continuous deployment, our workflow was simplified to writing articles (and updating core site files once in a while) and pushing changes to our GitHub repo. That meant we never had to open a browser, stay connected to the internet, log in anywhere or save our work online. In fact, I am typing this right now completely offline, on Ulysses, my writing app of choice. Once I am done I will connect to the web when I can, run git pull --rebase to bring my local copy up to date with the master branch (simply a precaution against conflicting versions), save the markdown file into my cloned repo, and push the changes. Netlify will take things forward thereafter.

In practice, once an article is ready, this entire workflow takes less than a couple of minutes—including being able to view the changes on the live site. As the cherry on the cake, using Hugo makes this superfast, extremely stable and quite extensible. Plus, hugo server --watch -D serves a draft copy of the website locally, with instant updates and no manual refreshing, which makes it easy to make changes and preview them quickly before they go live.

On leaving Kirby etc.

For a few weeks now my wife has been asking me why I have been playing around with my website design so much. She was not alone: eight readers contacted me over the past month enquiring about the same thing. The answer is everything above. All that I said for Physics Capsule applies for this website too. You are reading this article right now on a static site built with Hugo, hosted on a Git repo and served by Netlify.

The reason I made this move was two-fold. First, unlike before, my work was demanding enough time that I could not dedicate a few hours every month towards maintaining the site, whether for security, functionality or design. Second, during the past eleven years that this website has been running I have moved only once (from Wordpress to Kirby), and my previous set-up with Kirby 2 was great except that, with the launch of Kirby 3, I had to purchase a new license to upgrade—one that turned out to be nearly four times costlier than my Kirby 2 license. Staying with Kirby 2 was not a future-proof option either since, somewhat rudely I find, all the Kirby 2 docs were pulled and replaced with Kirby 3 docs. If I ever ran into an issue I would have no references or support except perhaps from the active Kirby community.

Content lock-in can be a nightmare. In the interest of keeping things simple and future-proof I did not want to compromise on using pure Markdown for writing articles.

The Hugo community is just as active, if not more so. Building sites with Hugo, while different[1], shares several core ideas with Kirby, and that made it easy to design this site from scratch. The many designs I had been exploring over the couple of months before this helped me finally arrive at my current design. I realised that I wanted a mature set-up that could take fairly good care of itself without my constant attention, and one on which writing was as easy as writing any other longform content on my computer. I also realised that my priorities with respect to design and function had changed.

For starters, any design had to be functional and minimal. I have always rooted for something minimal but this time round, embracing the JAMstack, I wanted my code to be minimal as well: fetch few external assets, fetch nothing unnecessary, defer and async as needed, and focus on speed and simplicity. At the top of my list, for functionality, was a dark mode. You can click the adjustment icon in the menu above to switch between a bright (daytime) and a dark (nighttime) mode. These have been carefully designed to look as beautiful as possible. Coupled with Minion[2] and Bernina Sans, the rich palette and sprinkles of JavaScript magic make this website something I can be happy about, something that is simple and straightforward and easy to manage; something that lets me truly focus on my content while making pushing commits laughably easy.

Speaking of content, a sad lesson I learnt with Wordpress and Kirby is that content lock-in can be a nightmare. Sure, Wordpress exports files, but it does so in a human-unreadable .xml format. Kirby fares somewhat better in that its content files can simply be copied elsewhere, but it relies on proprietary Kirbytags for a lot of functionality. It does have a panel (via a browser) and a CLI, but the latter, as of the time of writing this article, is lagging behind with no support in Kirby 3[3]. Therefore, simple copy–pasting does not always suffice to export Kirby articles. In the interest of keeping things (once again) simple and future-proof I did not want to compromise on using pure Markdown for writing articles. Wordpress too came up with a solution for this in Gutenberg, but that editor feels rather half-baked to me right now, even in Wordpress 5.0+. Plus, Markdown is not a standard Wordpress feature; you need Jetpack for that.

Hugo, on the other hand, supports Markdown out of the box via the Blackfriday Markdown parser for Go. No extra set-up is needed, not even a couple of lines in the config.yaml file the way Kirby’s config.php needs Markdown Extra switched on. Hugo even supports footnotes out of the box, as you may have noticed on this page, and these use standard Markdown Extra syntax, not custom shortcodes. That said, Hugo does ship with some standard shortcodes and supports building custom ones; it is simply that I would rather not use them. All in all, what Hugo offers is a means of keeping my writing ‘rich’ without content lock-in, made possible by robust Markdown support. It did mean giving up some fancier stuff, in that I now have to type a div manually rather than click a fancy button as in Wordpress, but this is a small price to pay for pull quotes that I use rarely.

There is something blissful about such a set-up as this, not the least of which is the knowledge that it is rock-solid (unlike Wordpress[4]), and that it is simple, easy to use, straightforward, and future-proof with fairly good backwards compatibility (unlike Kirby). That is not to say that either Wordpress or Kirby is bad: I enjoyed my Kirby site while it lasted, and I see that Wordpress has its perks when building sites for clients. Neither is for me—at least not right now.

The simplicity afforded by Hugo, the speed of my workflow, the ease of setting things up, the security offered, the robust asset pipeline, the absence of any form of lock-in, and the convenience of managing my website on a daily basis all made this move to Hugo reasonable for me despite minor additional tasks along the way, like transferring select data over. If you think I am the only one jumping platforms, think again: Erik Bernhardsson has a nice little contingency plot that shows just how common this practice is today. And guess what that plot says: Go is, in all likelihood, the future of programming such stuff as static sites. Wonderful.


  1. Kirby uses the latest version of the old horse php whereas Hugo uses the more modern Go. ↩︎

  2. Minion is downright my favourite typeface of all time—even my wedding invitations and envelopes were typeset in Minion when I designed them last year. ↩︎

  3. Kirby argues that working on the CLI takes away focus from the CMS itself. Hugo, on the other hand, embraces the command line beautifully. Perhaps these are two separate, equally valid routes; it is just that I prefer the latter. ↩︎

  4. On Wordpress any plugin could break your entire website and you would not even have a method of identifying the culprit easily. ↩︎

What happens when our virtual selves take us over

One need hardly be a crusader against modern technology to realise that, like any tool, it has its good side and its bad. The trouble is that far too few of us are ready to acknowledge and come to terms with this fact.

In Ancient Greece there once lived a young boy whose handsomeness was dazzling. He was, however, blissfully unaware of it. At some point in his life a young nymph came to him and expressed her love, but this young lad dismissed her unabashedly and went about his day’s work. Apparently this severely displeased the Grecian gods and they decided to teach the fellow a lesson. They decided that he had lived in ignorance long enough and that it was time he realised his own handsomeness. That evening found him at a spring where he happened to bend down and catch a glimpse of himself in the water. For the first time ever, he saw his own reflection and was stunned; he was so stunned, in fact, that he fell in love with his reflection and began to pine for this ‘other person’. Like the nymph, he would never win the heart of his beloved; unlike the nymph, our young boy would go on to die for it. The boy’s name was Narcissus.

I

There are far too many inconsistencies with this tale of Narcissus, not the least of which is its truth. Did such a boy exist? Did the Grecian gods have nothing better to do than focus on the love story of a random kid? Was this just an old wives’ tale designed to make a point about what is socially acceptable? None of this, however, prevented Freud from using Narcissus’s name for a disorder most of us are familiar with today: narcissism, where one feels a vain, exaggerated recognition of one’s own importance, often paired with a helpless desire to be admired.

There is more to Narcissus than Freud’s usage betrays. The myth continues: a plant bloomed where Narcissus fell to his death. This is the narcissus, the flower that shares its name with the Greek lad and belongs to the amaryllis family. It is known for its narcotic, numbing effect; in Greek, narkē means numbness. It was Narcissus’s lack of realisation, his numbness, so to speak, that led to his death. And narcissism might just as well be interpreted as a numbness one feels towards one’s self that causes them to blow up their own importance, eventually losing track of who they are and possibly even beginning to desire to be someone else.

I admit perhaps I am letting myself wander at this point—the last thing I would want is to get into a losing battle with a psychologist. If I am, in fact, wandering rest assured that it is with good intention. Today’s world is slowly redesigning itself to normalise a certain degree of narcissism that would have been frowned upon only decades ago. And social media has played no small part in bringing about this change.

None of this is to claim that social media breeds narcissism or that it makes a narcissist where none existed. But it does actively bring to the surface that hovering bit of narcissism that lies dormant in us all—whether we like to admit it or not. What it then does is normalise it because social networks are designed to feed on this. To blame this entirely on the social web too would be wrong: it does help us in some other ways after all; the question is whether the tradeoff is worth it and it rarely is.

The social web is built to enable transfer of information. But information can be transferred only so long as someone is out there seeking it. And someone will only seek it when they believe, on some level, that they are likely to find it there. That is to say, the model atop which the social web is built is to see what information someone is looking for and to place that before them. But, in a characteristic fashion, modern technology has gone one step further. It now attempts to understand seekers well enough to be prepared with what they are likely to seek. Going further still, having understood what someone might be interested in, the social web simply shoves that in their face—targeted advertising—in the hope that at least a handful in a crowd of hundred pursue it further. All of this translates to money.

So if knowing us really well is what will drive the social web towards success and unimagined profits, what better motivation exists for social networks to want to make us share more about ourselves and our lives?

II

The human mind adapts and manipulates in equal measure. It is inherently biased in all its observations. The incentive-and-rewards system designed by the social web plays the mind slyly and carefully: it makes us want to share by tapping into our social instinct, and it rewards us with responses from others, highlighting those responses throughout our day for no obvious reason. A little notification here and there that makes us rush to our phones is really a pitiable reward system, wholly designed to benefit the platform serving those notifications. Added to all this, notifying creates a sense of urgency.

This also comes down to a numbers game. The only real way to ‘grow’ on social media—whatever that means—is to participate with consistency. ‘Likes’ and ‘Favourites’ and other such statistics do not mean a lot to everyone. Those to whom these numbers do make a difference are already within the platform and will work on staying there. It is those to whom such numbers are not of consequence that platforms have to work on retaining. And this is done using equivocations like ‘engagement’: the number of times people saw your updates, the number of people wanting to follow you, the number of people actually following you and so on. All of this comes down to cold, hard numbers. How many people would still keep sharing on social media if they knew nobody would ever see their posts?

We dress up events so often that we have slowly begun to lose our sense of reality itself, let alone the event.

Add this all up and you find yourself in a system carefully designed to pull you in and keep you in and make it as hard as possible to leave. No doubt a person can simply choose to quit and stay that way (I have done it myself) but just how representative of the population is this practice? The average social media user has anonymous private accounts, notifications turned on for all platforms, connects to the web as often as possible and has a constant fear of, one, missing out, and two, needing some entertainment to keep themselves engaged.

To want to remain engaged in a world where attention deficits are increasing might seem counterintuitive but it is not: the engagement that social media provides is by nature designed for attention deficiency. Everything is bite-sized so you can spend five minutes on something before heading out to the next entertainer with a faux sense of having gained ‘new information’ along the way. By contrast reading a book demands days of continual attention.

What is this attention deficit doing, though? Why does it matter and why is it important? The shrinking attention span plaguing much of today’s population is meant to take your mind off something much more sinister. Users are slowly being numbed to the fact that their presence on the social web is not one where they exist but rather one where they are constantly and deliberately curating a version of themselves to showcase before the world. Every time someone looks at the social web they are looking not at themselves but at their reflection in a spring. The more time passes this way, the smaller the chance that a user recognises the distinction. The social web becomes a modern-day retelling of Narcissus’s myth.

III

If all this repeatedly reads like a dramatic cry against social media the reader will have to make a conscious effort to keep in mind that it is nothing of the sort. As said already, social media neither breeds narcissism nor makes a narcissist where none existed. Tools are rarely to blame especially when they have several valid positive uses too. The fault does not lie in social media at all; the fault lies in us.

How often have we seen someone enjoying a little moment in their day only to be swept away by the urge to share it online? Speaking as someone who almost never shares everyday moments on social networks, there is a surprisingly vivid mask that gets drawn across people’s faces—perhaps unintentionally, perhaps by habit—as they morph from themselves, who were enjoying the moment, into their virtual selves, who are ready to pose and photograph (or realise in some other fashion) that visualisation of events which they would like to put up online.

Everybody dresses real life up and it is not a new practice. Photographs taken way back in the 60s, too, suggest that people loved setting things up before making a picture. The difference is that back then this was an occasional activity; now we dress up an event so often that we have slowly begun to lose our sense of reality itself, let alone of the event. Pete Nicholson puts it quite eloquently—

I find myself enjoying a fun or interesting or strange thing and then, at a certain point, as if some invisible switch were flipped, I suddenly notice myself wondering the best way to communicate the moment to other people, typically via something you can do on a smartphone. Invariably, when I attempt to return to the moment, it’s gone.

The trick is to balance things. One could share later rather than now, so that nobody focusses on making pictures to share while the moment is underway—we need only make pictures if we feel like it and share them later if we have any. And if we have none, there is nothing to share.

However it is not just pictures that are the culprits. The social web has given everybody a soapbox to shout from. The trouble is that no-one is listening. It becomes important then to realise that not every thought we have needs to be broadcast on social media. Some can simply be kept to ourselves.

What we need today are what the journalist and author William Powers calls ‘Walden zones’, places around our home and work where devices are banned. He also points out the clever idea of having long moments of disconnection between successive uses of our social media. (You can read about William Powers’s book Hamlet’s Blackberry on my bookshelf.) This is key to ensuring we can keep our shiny new toys—perhaps even feel that we have earned them—without experiencing any adverse effects on our lives.

But there is always the elephant in the room: Do we have it in us to develop such discipline? Do we have it in us to set up Walden zones and stand by them? Do we have it in us to keep track of our connected lives and rein ourselves in from time to time? After all, this is something Narcissus could not do. For my own part this has not been hard, which is what gives me hope that anyone else can do it too if they, firstly, acknowledge the issue and, secondly, make a sincere attempt—both of which are easier said than done. Ironically enough, I have had some additional assistance of late from my iPhone which, with iOS 12, tells me how much I use my phone every day and even compiles a report every week. Like most graphs it is insightful and at times unusually helpful, and I have been making some progress on that front as well.

Our virtual selves, our faux reflections, ought not to be subtly running our real lives. But they are doing just that today. Despite the advance of technology, which will only quicken in the coming decades, humanity is not about to disappear; human interactions are not about to be replaced, except to our own downfall; and if we proceed as we have been in the past, our virtual selves will not stop trying to take us over any time soon.

Hamlet’s Blackberry

A timely philosophical reflection on the digital influences in our daily lives and how we can harmonise with them.

Although I read this book four years ago, I was reading it four years after it was first published. In the digital age that is a lifetime: between the book’s publication and now nearly five million startups have come up, most have died, and nearly all of them madly vied for our attention along the way. This madness is yet to die, which is what still keeps Mr Powers’s book relevant. That, reinforced by my insistence that the social media of today can potentially induce ADHD—but I digress.

Unlike a lot of books that deal with these topics, Hamlet’s Blackberry is not driven by a hatred for technology or by glorious calls for abstinence. This is precisely what makes a reader like myself take Mr Powers seriously. Rather than citing examples of recent times—of people walking into lampposts or diving under trucks, lost on their phones—he talks about how our concern for modern technology is not entirely well-founded. Or, at least, that it has had predecessors: we have always been worried about the new, and have turned out just fine as a society.

A lot of us are feeling tapped out, hungry for some time away from the crowd. Life in the digital room would be saner and more fulfilling if we knew how to leave it now and then.

It is as individuals, though, that we need to look into ourselves, and Hamlet’s Blackberry talks about this beautifully. Right at the start of the book Mr Powers talks about visiting his mother and how he could call her up and let her know he was running late. Technology made that possible with great ease: without it he would have either had no way of getting in touch with her or would have had to look for more time-consuming means, like a payphone. So technology is good, he says, but he focusses entirely on what happens after he puts his phone down. He loses himself momentarily in memories of his mother, making it appear as though technology brought him closer. Perhaps it did, but the circumstances are just as important, he points out. Had he called his mother and, soon after, checked his tweets or his Facebook timeline he would not have experienced the same bliss. With that he establishes a thread that stitches together the foundation of his book.

Using technology can be a good thing but ensuring we have ample pockets of independent time between successive uses of technology is extremely important to keep our life well-balanced. It would, he says, ‘be saner and more fulfilling if we knew how to leave [the digital world] now and then.’

He proposes Walden zones (inspired by Thoreau) and speaks of how his own family takes the weekends off the web. Moments of disconnect like this not only keep us alertly in the present but also enrich those planned, intentional moments during which we actually do connect.

What sets this book apart is its philosophical approach. This is also what will ensure it remains meaningful for years to come. Rather than offering absolute solutions that might lose validity with time, Mr Powers explores the philosophies that ought to drive us to control our use of technology and help us rein ourselves in. This alone makes Hamlet’s Blackberry worth having on our reading list.

macOS for physicists

On why physicists love MacBooks; or, the complete guide to why you should opt for a MacBook if you’re into science.

It is no secret that MacBooks have long been a popular choice of personal computers among physicists. When the cosmologist Hitoshi Murayama wrote, almost ten years ago, what was perhaps the first iteration of the sort of article you are currently reading, Macs had already become a favourite across several laboratories and physics departments around the world. His article was later updated by others, most notably by the incognito physicist going by the name ‘Flip Tomato’ (who also wrote this wonderful piece) and, more specifically for astrophysicists, by the cosmologist Edd Edmondson.

The last such article was written back in 2013 during the days when Mac OS X had just been overhauled and its name had been shortened to OS X. Today we have macOS which, like OS X, is a considerable overhaul of Apple’s wonderful operating system in much more than name. However, no updated overview or guide for physicists getting started with (a new) Mac exists yet. This article is intended to serve precisely that purpose. Everything here has been tested on the current version of macOS, High Sierra.

Why use macOS?

Most of this is common knowledge but here is a rundown anyway. Windows is a bucketload of custom, proprietary code. This is a crude way to put it, but it is what it is, and Windows is riddled with problems of smooth operation and reliability. (I know this statement is bound to polarise opinions, so if you have had a good experience with Windows, more power to you. Nobody I know has had the same, unfortunately.) By contrast, macOS is built atop the XNU kernel (which also powers iOS, tvOS etc.), which in turn carries with it UNIX facilities. This makes a huge difference: the XNU kernel that runs macOS, the UNIX layer, and the operating system called Darwin that is built on them are all freely available.

The end result is that UNIX is something physicists have been thoroughly familiar with for years, and macOS couples that with its graphical user interface, most visibly the Finder, to provide the best of both worlds: a smooth operating system with great visual character and efficiently designed functionality, with the UNIX command line (via Terminal.app) a single click away for programming use. Of course, all this is to say nothing of the unmatched hardware Apple is known for.

Popular physicist Brian Cox, a longtime fan of Macs, says, ‘When you look around a physics conference now you see more Macs than anything else … I think that’s because they’re essentially UNIX … There’s a huge code base. We’re still using programs written in Fortran quite a lot—programs that were written in the ’70s and ’80s—and they compile directly on the Mac. It’s very easy to do, as opposed to Windows, where it’s just a pain to compile all the old legacy programs.’

Get started with Xcode

Earlier you could install OS X from a DVD but those days are behind us now. macOS gets annual major upgrades with minor updates almost every month, all free and all over-the-air. Almost all new Macs come with a recent version of the operating system installed. This is convenient, but it takes away some control at the installation stage that might otherwise have been appreciated. The set-up that follows, however, is no more complicated for it.

The first tool to install is Apple’s own Xcode. It is available for free from the App Store. Xcode is an IDE and carries a bunch of files you need to make your Mac programming-aware. It brings the headers, libraries and compilers for most languages you will need—C, C++, Objective-C, Java, Python, Ruby, Apple’s own Swift and the Cocoa frameworks, among much else—all in one application. If you want to use Pascal and Perl, for example, free third-party add-ons to Xcode can make it happen in no time.

As a last step after installing Xcode, open Terminal.app, say sudo xcodebuild -license and accept the Xcode license.

TeX

Your next stop will be setting up your Mac for TeX. A quick install is possible with a lightweight distribution called BasicTeX that includes only select packages, and although you may be tempted to install that, its lack of packages will eventually prove to be a pain in the neck. Choose instead to download the entire TeX library via the complete MacTeX package. It comes as a .pkg for macOS and will install everything you need and create an easily accessible TeX tree. To locate your tree simply run kpsewhich -var-value TEXMFHOME in a Terminal window.

Once you have your TeX installation ready, make sure you keep it updated via tlmgr update --self --all in the Terminal. You can also use something with a GUI, like TeX Live Utility, to keep your TeX tree in sync with CTAN. If you are not comfortable with the terminal, or if you are looking for efficiency, there are several front-ends available for TeX with great GUIs. TeXShop is probably your best free option while Texpad is the one I personally use (Texpad costs money but is worth every penny; it even lets me, via Texpad Connect, sync tex files across my Macs and iPad to work while on the go). TeXShop seems rough around the edges but works great; Texpad will be more in line with your Mac experience: ‘we’ve designed the UI to meet high expectations of Mac users’ says their website.

Get a package manager

Physics-y stuff is helped a great deal by package managers. In Prof Murayama’s original article he recommends Fink. I doubt anyone really prefers this anymore. (A lot has changed since Prof Murayama’s article was first published. This is a particularly telling sentence: ‘The Xcode is big, though, weighing more than 800MB. Don’t attempt a download without a broadband.’ Today Xcode is around 6 GB in size.) MacPorts and Homebrew are much better alternatives. Homebrew is my personal choice. Of course Fink is the oldest so there may be many who prefer it merely by habit but you can hardly go wrong with either of these.

MacPorts comes with a straightforward .pkg installer. Installing Homebrew is no less straightforward but perhaps considerably more fun. Open Terminal.app and say /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)". Some interesting packages to install are imagemagick, ghostscript and, if you like, xdvi. Most GUI-based document and image handling, though, is built into the incredibly powerful Preview.app on macOS.

Emacs or VIM

Some may argue that I was hasty in jumping from Xcode to TeX to a package manager without ever talking about VIM or another Emacs alternative (and they may be correct) but rest assured that there is a reason for it: Macs come with Emacs built in so, to be honest, you really have nothing to do here. That said, if for some reason—say habit—you are in search of a more familiar interface you can always opt to use VIM which also comes pre-installed. Simply say vim at the terminal prompt.

But now we come to the catch. The Emacs that comes with your Mac does not support X11, which any physics student will have encountered as part of a computing laboratory course with gnuplot or some equivalent graphing software. (Sometimes I wonder if X11 is the only thing common to all physics courses worldwide.) That is why I led with a package manager before talking about Emacs.

X11, gnuplot and such

I wrote a guide some years ago, when I was a student myself, that, to my excitement, seems to have been linked to from a couple of universities. I will save myself the trouble of re-writing it, but in that old article you can find four steps to install gnuplot that also install X11 and any other terminal you may like. These instructions use Homebrew.

According to Prof Murayama’s article, X11 used to be available straight from the OS X installation disc. I know of no such arrangement in the current OTA update cycle of macOS. The steps in the article linked above are your best bet to get set up with X11 on your Mac.

macOS Server

Do you and a colleague each have a Mac? Is your colleague not using their Mac for high-intensity activity at the moment? Do your eyes water when you see all their CPU cycles going to waste? Fear not, you can harvest their CPU power by hooking up all your Macs to the Xgrid cluster.

In all seriousness, though, if you have kept up with past macOS updates you will know that I led with a joke: Xgrid is dead. Apple now offers macOS Server instead, and for $20 from the App Store you can harness the power of multiple Macs in your department or laboratory and set up a joint workgroup, share data locally, set up a mail server, a DNS server and what-have-you, all from the comfort of your office. With enough Macs you can have your own multi-CPU supercomputer.

You can also use it for simpler tasks besides analysing particle beams. CERN, for instance, allows installation of several institution-level applications (LabVIEW, hpglview, Mathematica, Octave, MATLAB and more), both general-purpose and mathematical, right from its local macOS Server set-up rather than from the cloud.

If your Mac is strictly for personal use you need not worry about macOS Server.

Some nerdy packages

Besides computational and condensed-matter physicists, it is astrophysicists and particle physicists who port packages most often. This world is therefore filled with packages for such fields. If you followed the instructions to install Homebrew above you will enjoy David C. Hall’s HEP packages for Homebrew. While you are cloning repositories off GitHub you might also be interested in my LaTeX class for lecture notes (which comes with Italian translations thanks to S. Maggiolo).

Also try IDL and IRAF if you are into astrophysics. Since COMSOL and other such applications are costly for mortal individuals you can test out alternatives like FEniCS or MOOSE (warning: installation procedure is not for the faint-hearted) that help accomplish similar end goals.

Finally, this somewhat old-looking but still valid webpage contains lots of applications and data for High Performance Computing on macOS. You should not have to hunt for many alternatives since the page seems to have been updated at least as far as macOS Sierra.

The ‘usual’ stuff

Although it is fancy to pwd your way around from time to time in the Terminal and open -a "application" ~/path/to/file instead of using the GUI like puny normal human beings, that is not how most of us use our personal computers. We use them for checking our e-mails and calendars and notes and reminders, for typing up reports and articles, for making presentations and what not. For all these you need nothing fancy; all macOS installations come with Pages, Keynote, Numbers, Mail, Reminders, Notes, Calendar, Preview, Grapher, Dictionary, Contacts, Time Machine, Keychain and, of course, the all-powerful TextEdit, among other applications. Resist the urge to download alternative apps. For most of us, even though we may not like to admit it, these stock apps work just fine.

Lastly, as a physicist I must mention gaming. Prof Murayama says of ‘usual’ apps on a Mac, ‘For presentations, Apple’s Keynote is rapidly gaining popularity. It allows PDF graphics without losing scalability, has cool transitions. Steve Jobs himself uses it for MacWorld keynote addresses … I haven’t had much problem finding good applications for research purposes. People complain that there are still less games available for Macintosh, that may be a good thing. ^_^’

Today there are thousands of games for Macs, both on the App Store and on platforms like Steam. My lazy pastime is Civilization VI. For a quick fix, my friend and colleague introduced me to Real Boxing. For leisure unrelated to gaming, HandBrake is good for ripping DVDs, Cyberduck for FTP, and my main writing app of choice is Ulysses.

Finally, for a bit of nostalgia, if you grew up in the 80s and 90s playing DOS games, try DOSBox or, better yet, Boxer to emulate a DOS environment on your Mac. If you need game suggestions my favourites are Dave (where you play Dave), Prince (the predecessor to the modern Prince of Persia series), SkyRoads (roads in the sky, what more can I say?) and Bio Menace (shooting up aliens and saving hostages across cities, forests and laboratories). That is enough nostalgia for the day.

Have fun with your Mac, fellow physicist.

Technology these days has been making incessant attempts at learning our likes, dislikes, wants, needs, desires and tastes in a bid to be ‘smart’ and in the name of efficiency and helpfulness. It is becoming the personal secretary nobody asked for. Ever since this craze started, the face of this attempt for the average internet user has, for better or worse, been Google (the search engine, not the company itself).

Why technology wants to be clever as opposed to easy-to-use is beyond me. If something is easy enough to use we would hardly want an assistant to do the job for us. Even going by the idea that four hands are better than two one must realise that in the real world the additional pair of hands rarely comes without a brain attached; and this makes all the difference. In being a clever assistant most technology is becoming an annoying, attention-seeking one; it is becoming precisely the kind of assistant you would want to fire.

When cleverness is like a poor beta feature

My displeasure with such programmes has long been laid out in the form of two articles. One was a direct argument (I have since taken the article down, choosing, for personal reasons, not to migrate it to the current, tenth-anniversary iteration of this website) where I wrote about why I picked Airmail over Spark, Email, Inbox and other such apps to handle my e-mails: the ‘smart’ approach the latter apps use simply got in my way and messed up my otherwise straightforward e-mail workflow (see postscript below). Except for obvious ones like bookings and newsletters, most such apps regularly trip up in categorising e-mails. And e-mail is not a domain where tripping up makes things easy.

Some e-mail apps reorganised my inbox, snatching control from my hands, while others used weird snoozing tricks. I have since left Airmail too, thanks to its adamant folder-structuring practice that keeps its folders disparate from my existing ones and ends up duplicating folders as a result. Again, this is a case of an app snatching control and deciding for you. All this, of course, is to say nothing of the privacy concerns that come with letting apps store your login credentials on their own servers. I have since returned to the simple and straightforward Mail.app on both iOS and macOS.

Back when Google came into existence it was not unlike a directory of websites catalogued under various suitable topics. If you searched for certain keywords, therefore, and if someone else too searched for the same keywords you would both get the same results. This made—and still makes—a lot of sense. If you went into a library today you and anyone else looking for books on a certain topic would likely both come across the same set of books, give or take a couple.

Also, there was healthy competition in those early days. The effectiveness of searches was directly proportional to how much of the web had been crawled. This was like a larger library being potentially more likely to provide better resources for you when compared to a smaller one. Somewhere around then Google went from being a search engine to your personal portal to the web, meant exclusively to please you. And that is when the trouble began.

Google, your personal silo

Imagine a telephone directory that decided whether or not you needed to see someone’s phone number. In much the same manner, as soon as Google started tweaking its search results its core purpose took the back seat. Instead of giving you the results most relevant to what you wanted, it would now also consider your previous searches and (what it perceives as) your interests to fine-tune the search results you get.

Who is to say they did not go further and make arrangements to tune results in the name of political correctness, decency and other subjective metrics? This amounts to censorship. A searcher may not exactly be right-leaning but when he searches for something he ought to get the whole picture, whether left, right, centre or whatever else, political, scientific, economic, environmental, industrial etc.

The more deeply we believe in a certain outlook the less likely we are to appreciate opposing perspectives.

In almost all cases people search for something because they do not know the answer. It then becomes the moral responsibility of whoever provides the answer (or a path to it) to do so without bias. Because explaining with bias, however deep or shallow, is hardly different from propaganda.

The end result of all this is that Google goes from being a neutral search engine—there should be no other kind of search engine, but I digress—to an ideologically driven one, giving you the answers it knows you expect so that you are under the impression that it is giving you all the right answers. After all, who is not eager, to a fault, to believe themselves?

Such silos are dangerous for society, and I had said as much in one of my older articles. They deepen the divide and make people less open to the possibility that they are wrong. Left or right, believer in extreme political correctness or not, believer in a free market or not, racist or not, sexist or not, everyone has something to learn about their stance and about their stubbornness in keeping it. And the more we work with platforms that echo our beliefs, the more likely we are to delude ourselves that our beliefs are correct and the more deeply we come to believe them. The more deeply we believe in a certain outlook, the less likely we are to appreciate opposing perspectives and change our stance.

There are technical challenges too

The problem with Google today is not one of mindset or approach alone. Google’s synonymity with the web itself has given it a monopoly on how people interact with the near-infinite data available to them. This means, somewhat frighteningly, that Google has a say in what people read, what they watch and what they listen to. In turn it means Google gets to decide what people believe in, what they discuss, what conversations start and what conversations die, what trends rise and what trends fall, and a lot more that will shape our society tomorrow.

This is an incredibly powerful position and by doing something as simple as making a task hard Google can guide us towards something it prefers. The human mind is a curious one: drunk with information we will stagger away from a path when the going gets tough and settle for absolutely any other means of consuming more information.

With Accelerated Mobile Pages (AMP), the search-engine giant has made some websites preferable to others in the eyes of its algorithm. Perhaps AMP will improve, but in its current state it renders parts of websites unusable and, to add insult to injury, there is no way to turn the abomination off. If you want to use Google, you will have to tolerate AMP. And therein lies the problem: not only is Google deciding for us, it is also showing people its own half-baked version of every ‘AMP-powered’ website in its search results, crippling the image of those websites in turn.

If the web is not helping you at the end of the day, if it does not appear to be working in your best interests, you are better off without it.

Giving the company the benefit of the doubt (which it probably does not deserve) let us suppose that AMP will get corrected over time. Yet Google has other problems just as bad to deal with. For example, following a tussle with Getty, to set things straight the company ended up chopping off one of its own limbs: it pulled arguably the most useful feature of its image search capability. Google no longer allows you to open up an image with one click.

First of all, as a mediator, there was no reason for Google to do this. If websites prefer not to have their images hotlinked they can prevent it themselves—with a referrer rule in their server configuration, or by disallowing image crawling in robots.txt—so that clicking to view an image directly via Google displays an error or, better still, redirects to the page on the website where that image has been used. That the average internet user does not know this is what allows Google to exploit it; by simply removing that feature Google completely changes how people search for images online and forces them to visit the website hosting the image.

There is always an alternative

The other problem—as with any monopoly—is that most people seem blissfully unaware of alternatives to Google. On iOS, for example, there is an option to change your default search engine, and the same is true on the Mac (on iOS head to Settings → Safari → Search Engine; on macOS look under Safari → Preferences → Search → Search engine). Microsoft will undoubtedly have a similar option, not least because they have their own search engine, Bing. Android too, despite being a Google product, is known well enough for its customisation that it likely has an option to switch your default search engine.

Go ahead and pick DuckDuckGo. If you are in the US, consider taking advantage of Bing’s rewards programme that lets you exchange points earned from searching for gift cards. At the very least, explore other search engines without blindly turning to Google. You will likely find that the results they provide are superior in some ways; in the case of DuckDuckGo—my personal choice—there is a lot of respect for privacy and you get neutral searches.

Of course the downside of all this is that you may miss the accuracy of searches (read, echoes in your silo) that comes from Google. On the one hand you will eventually get used to looking at pages two to five of your search results; on the other you might spend less time wandering aimlessly on the web because there is little to distract you.

As with everything there are perks and pitfalls—after all, Google dominates the field for a reason. But that should not make you give up control of how you search the web. Because if the web is not helping you at the end of the day, if it does not appear to be working in your best interests, you are better off without it. And if Google is subtly puppeteering your web habits, firmly discourage it. Nip it in the bud, because the trouble with Google Search may, after all, have taken root only because we let it.


ps This is a note about my e-mail workflow for the curious. I employ a three-stage e-mail workflow. First, if an e-mail can be addressed within one minute I do so. Second, one of four things happens to all my other e-mails: if I need them for later reference I flag them and keep them in my inbox, if they are even mildly unimportant I delete them, if they are e-mails that are interesting but I have no use for immediately I archive them, if I need to address an e-mail but cannot do so within a minute I leave it in my inbox without a flag. Third, keeping in mind that all e-mails are now either deleted, archived, flagged or untouched, I deal with the last two groups by either archiving e-mails once I am done with them or leaving them untouched until they are dealt with. That way I know my inbox only has important e-mails that either need my attention or I will have need for soon. Lastly, I deal with newsletters and the like by adding select articles to my Safari reading list and deleting (or, rarely, archiving) the newsletter immediately.

On entropy, randomness and shuffling a playlist

Randomness is a surprisingly physical idea.

One of the many introductory Octave commands students are taught in our physics course is rand(). All this does is generate a random number—a real number between 0 and 1—so if you want an integer you can scale it and round() it off. This seemingly trivial capability is present in almost all computing languages[1]. But mentioning it in class recently brought back a recurring question to my mind: just how does a computer generate random numbers?
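For readers who do not use Octave, here is the same idea as a rough Python sketch; random.random() stands in for rand(), and the 1-to-10 range is simply an illustrative choice of mine.

import random

r = random.random()        # a random real number in [0, 1), much like rand()
n = round(1 + 9 * r)       # scale and round it off to get an integer from 1 to 10
print(r, n)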

Think about this for a moment: if a computer could in fact generate a random number based purely on chance we would be in a rather dangerous world. The only reason you, as a human, can come up with a random number is because you have some sort of free will. A computer is, by definition, a dumb machine that follows instructions[2], so it should never be able to think up something at random—unless it had free will or some equivalent capacity for unbridled thinking. True randomness, in that sense, would be proof of free thinking.

This was also the subject of a conversation with a colleague when we were wondering whether, given an exhaustive playlist and ample time, we could determine the ‘randomness’ of our phones shuffling our music. After all, shuffling is simply a form of random number generation: going from the 6th song to the 24th song to the 18th song et cetera means using the randomly generated numbers 6, 24, 18 and so on. But determinism and randomness are opposites; if we could determine the pattern in which our songs were playing, it would mean that our devices were not truly shuffling anything but were, instead, following a complex, layered—but periodic or otherwise identifiable—pattern in the process.
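Incidentally, the ‘pseudo’ in a pseudorandom shuffle is easy to demonstrate once you control the seed yourself. A small Python sketch (with a made-up playlist of numbered songs) shows that the same seed reproduces exactly the same ‘shuffle’ every time:

import random

songs = list(range(1, 25))              # a made-up playlist of 24 numbered songs

first = songs[:]
random.Random(2019).shuffle(first)      # shuffle with a fixed seed
second = songs[:]
random.Random(2019).shuffle(second)     # shuffle again with the same seed

print(first == second)                  # True: the 'random' order is fully determined by the seed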

Getting started with pseudorandom numbers

The simplest way to generate a random-looking set of numbers is via a cipher. Here is a little exercise I tried out myself back when I was in school. Let us keep our limits[3] at 1 and 10, so we need to generate a random number between these.

Start off with a top-secret substitution cipher; say we have something like 3–1–8–2–4–7–6–9–5. Use the familiar sequence of natural numbers to compare this with. So 1 is 3; 2 is 1; 3 is 8; 4 is 2; 5 is 4 and so on. Next, pick a seed: say the user inputs 8, then, according to the cipher, you output 9.

At this point you are one-to-one with the seed. You could choose to keep this up and require a seed every time, but you can also become a little more independent by using your own output as the next seed. In our example 9 becomes the next seed, so the subsequent outputs will be 5, followed in turn by 4, 2, 1, 3 and 8.
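For concreteness, here is a minimal Python sketch of that toy generator, using the example cipher 3–1–8–2–4–7–6–9–5 and the seed 8 from above, with each output fed back in as the next seed:

# The substitution cipher from the text: 1 -> 3, 2 -> 1, 3 -> 8 and so on.
CIPHER = {1: 3, 2: 1, 3: 8, 4: 2, 5: 4, 6: 7, 7: 6, 8: 9, 9: 5}

def next_output(seed):
    # The 'random' number is simply the cipher's substitute for the seed.
    return CIPHER[seed]

value = 8            # the user-supplied seed from the example
outputs = []
for _ in range(7):
    value = next_output(value)   # feed each output back in as the next seed
    outputs.append(value)

print(outputs)       # [9, 5, 4, 2, 1, 3, 8] -- exactly the sequence described above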

This is good enough for a kid starting out at school but there are some obvious problems with this algorithm. For example, it would perhaps take a user three tries to figure out the pattern here[4]. The randomness is instantly broken.

If you chose to go one-to-one, a user could figure out your game simply by seeding the same number at least twice and getting the same output both times. From then on it would only be a matter of seeding each of the remaining eight numbers and finding out their appropriate substitutions.

At this point the user can game your system: if he wanted seven as his output he would simply seed six. This is only slightly more cumbersome than choosing the seventh song manually but it is outright pointless from the perspective of a random number generator.
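Gaming such a one-to-one system amounts to nothing more than inverting the cipher once it has been worked out; a tiny sketch (the CIPHER dictionary from the earlier snippet is repeated so this runs on its own):

CIPHER = {1: 3, 2: 1, 3: 8, 4: 2, 5: 4, 6: 7, 7: 6, 8: 9, 9: 5}
inverse = {output: seed for seed, output in CIPHER.items()}
print(inverse[7])    # 6: seed six and the generator obligingly hands you seven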

Surprisingly enough this is exactly how random number generators work on a fundamental level. This is why we normally call these pseudorandom numbers; the numbers chosen thusly look random but are not quite. The entire idea can now be re-stated as follows: if there are enough complexities in the number generation process to make the substitution hard to identify, the program will appear, for all intents and purposes, like a generator of truly random numbers.

Boosting the complexity, part 1

The most efficient way to boost the complexity of our algorithm is to identify its weakest links and strengthen them. The first is the length of the cipher[5], but we will not dwell on this because a truly random number generator should work on any set of limits, including something as small as one to ten.

While we are on the topic of lengthening the cipher itself, which is somewhat of a brute-force fix, here is a quick and dirty way of thinking about how it could still solve our ten-digit problem. Every number can be reduced to a single-digit number by simply adding up its digits repeatedly. This is not deep mathematics, but so long as we can separate the digits of a number (weigh them out by tens) we can add them up: 754 is simply 7 times 100 plus 5 times 10 plus 4 times one, so we ignore the weights of tens, add the multipliers to get 7+5+4=16, and repeat the process on 16 to get 7 in the end. Along the same lines, if you have a billion-strong substitution cipher you can simply choose a number by the same means as above and add its digits up over and over again till you get something within your range of interest (0–9 in our case). This is sufficiently random-looking in the short run and will take users quite long to break (unless they have a computer).
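The repeated digit-sum trick is only a few lines of code; a rough Python sketch using 754 from the example:

def digit_sum(n):
    # Add up the digits of n repeatedly until a single digit (0-9) remains.
    while n > 9:
        n = sum(int(digit) for digit in str(n))
    return n

print(digit_sum(754))   # 7 + 5 + 4 = 16, then 1 + 6 = 7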

So where do you really introduce complexity in all this? One of the problems with using a cipher is the secrecy involved (remember I said we need a ‘top-secret’ cipher—that was no joke). Of course you can never fully do away with this requirement but you can lessen the burden: instead of protecting a sequence of a billion numbers, drop the sequence and go for a formula instead. The great thing about this—besides the fact that it is simpler to handle a formula than a billion numbers—is that the formula need not make any sense whatsoever.

Say your formula is something like this: for a given number square it, subtract the previous output (or twelve if there is none), take the modulus, multiply by the previous output (or fourteen if there is none), compute its square-root and round it up to the nearest integer, then, if it is beyond a required range, keep adding up its digits repeatedly until you are within the range.

Of course the formula in this new method is complete gibberish but it does not matter. Nobody can figure it out6 unless you give away the formula to someone yourself. Why does the gibberish work, though? It works for the same reason why messy passwords that randomly combine upper and lower cases, numbers and symbols are safer than password123—randomness makes things harder to crack (this is not a universal rule, as xkcd once pointed out sometime back in 2011). Still, the whole idea is simple: you keep a human-randomised formula to generate random numbers so that there is greater randomness even if it is not absolute.

For the curious, the use of twelve and fourteen in the method above is to ensure the algorithm works on the first run. These numbers were chosen at random—you could have put anything else there too—and from the second run onwards the algorithm uses neither, choosing instead to use the previous outputs as mentioned. The twelve and fourteen, in other words, are simply random fallbacks.
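Here is a minimal Python sketch of the formula described above, with each output chained back in as the next input. The operations and the fallbacks of twelve and fourteen follow the text; the chaining, the 1–10 range and all the names are my own illustration.

import math

def make_generator(high=10):
    previous = None
    def next_number(n):
        nonlocal previous
        x = n * n                              # square it
        x -= previous if previous is not None else 12
        x = abs(x)                             # 'take the modulus'
        x *= previous if previous is not None else 14
        x = math.ceil(math.sqrt(x))            # square root, rounded up
        while x > high:                        # fold into range by summing digits
            x = sum(int(d) for d in str(x))
        previous = x
        return x
    return next_number

gen = make_generator()
value = 8                                      # user-supplied seed
for _ in range(4):
    value = gen(value)
    print(value)                               # 9, 8, 4, 7 for a seed of 8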

Boosting the complexity, part 2

There is a second method of making our number generator random. Notice that regardless of our generator we need a seed. Those are the only two ways of hacking into our approach: either figure out the cipher/formula or figure out the seed. Neither is a complete break-in by itself because you need both to predict the coming sequence of (pseudo)random numbers, but knowing either considerably reduces the effectiveness of the generator because if you know the seed you can game the first output and if you know the formula or cipher you can game all subsequent outputs.

The question then is how to strengthen the seed. If the user picks the seed there is no point discussing security or randomness anymore. The user just has to restart the system and use a brute force attack to figure out which seed gives them their desired output. Our ideal seed, therefore, should be something that changes every time the program starts and which does not require the user’s intervention.

Here is the most obvious solution to this: what number changes every day, nay every fraction of every second? The time. If our program can take the current time and add up its digits it can make up its own seed. From here on you could choose to use the past output as the next seed (as we did before) or simply keep regenerating a seed because the time changes every second. The former is good enough until the cipher/formula is broken but is still the safer choice, because if the user realises that the time is being used they can use a computer to start the program at exactly the right moment to successfully game7 the system. For our daily use, and for shuffling music in your car, this sort of random number generator will do. For one you probably have no intention of cracking the sequence so long as it is not obviously predictable; and secondly the stakes are not that high when your iPhone shuffles music.
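A clock-based seed takes only a few lines. The sketch below folds the current time into the 1–10 range using the digit-sum trick from earlier; the function name and the range are mine.

import time

def seed_from_time(high=10):
    s = sum(int(d) for d in str(time.time_ns()))   # digits of the current time
    while s > high:                                 # fold into the range
        s = sum(int(d) for d in str(s))
    return s

print(seed_from_time())    # a different seed on (almost) every run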

Taking things further with entropy

To generate a nearly perfectly random number one can turn to physics8. Quite literally, you need the entropy of the physical world to help you out here. Purely as an example let us start with an outlandish idea9. The key is a radioactive source (one of the reasons I picked this example is that the students at our department—we started with them, remember?—perform an experiment where they use a radioactive source and a Geiger counter to show randomness as the deviation from a Gaussian peak).

What we are getting at here is that if you use a radioactive source with a counter for preset time intervals you will end up with completely random counts of emitted radiation. With enough counts, say a hundred minute-long sessions, you will end up with a Gaussian curve, so some counts will always be more likely than others, but which count you get in any given interval is perfectly random. You can take the count itself or you can take its deviation from the mean (the most likely value), but the point is you now have a perfectly random number generator. Use this as the seed for the random number generator on your computer and you have a pretty solid program. True, your cipher/formula is now the weakest link in the chain, but if you can afford to simply generate random numbers based on the radioactive counts things should turn out just fine.
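As a sketch of that last step, the snippet below seeds an ordinary software generator from a reading’s deviation from the mean. The counts listed are made-up sample values; in practice they would come from the Geiger counter.

import random
import statistics

counts = [52, 47, 61, 49, 55, 58, 44, 50, 53, 57]    # hypothetical counts per minute
latest = 62                                           # hypothetical newest reading
seed = abs(latest - round(statistics.mean(counts)))   # deviation from the mean
random.seed(seed)                                     # feed it to the usual software PRNG
print(seed, random.randint(1, 10))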

Of course, not everyone has access to radioactive sources, but you can come up with a more everyday method. How you will feed this into your computer is secondary and is not something I will dwell on here; the physical source of the randomness is what we are currently interested in. If you flip a coin you will end up with either heads or tails. Let us call these one and zero bits respectively. Although the likelihood of either a one or a zero is half-and-half, we know that whatever happens on your next toss is in fact completely random. You can speed up the process by tossing four coins simultaneously (again, how you do this is left to you—bring a friend along, perhaps) and your outputs, say H–T–T–H or 1–0–0–1, form a nibble. Toss eight coins and you have a byte. If necessary convert this binary number into a decimal and you have your seed. Around 2010 the same strategy was used at the University of Hagen, but with flip-flop circuitry, to generate random numbers.
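Converting eight tosses into a decimal seed is nearly a one-liner. In this sketch the toss record is made up; heads map to 1 and tails to 0, exactly as above.

tosses = ['H', 'T', 'T', 'H', 'H', 'H', 'T', 'H']           # eight coin flips, recorded by hand
bits = ''.join('1' if t == 'H' else '0' for t in tosses)    # heads -> 1, tails -> 0
seed = int(bits, 2)                                         # read the byte as a decimal number
print(bits, '->', seed)                                     # 10011101 -> 157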

There are yet other ways, perhaps the most interesting of which is the lava lamp set-up that Cloudflare10 uses. Lava lamps generate bubbles randomly11. Cloudflare constantly takes pictures of these lamp set-ups and converts the pictures into streams of binary digits. Naturally no two pictures are ever going to be identical, so eternal randomness is guaranteed.

You can think of innumerable examples12 of this: a pair of dice (or, better yet, a dozen of them) or skipping stones on water (the skip count is going to be random) or even crumpling a few pieces of paper and counting the number of crumples on each. These are just a few examples I can think of while I write this piece but the point should be clear by now: we need to mix the digital and the physical world13 for true randomness and anything short of the physical world results in pseudorandom sequences.

Choose any phenomenon in nature that involves randomness and use it as your seed digitally. For absolute security keep redoing this so your digital output is exactly as random as your physical one. And for efficiency with some security trade-offs use a cipher/formula based on a new physically generated seed every once in a while. Of course if the randomness has to do with just your iPhone shuffling songs weirdly, you can always choose to overlook the fact that every time you click shuffle on a certain playlist Daniel Boone starts singing ‘Beautiful Sunday’.


  1. At least in all languages I’ve come across. ↩︎

  2. That it can be given specific instructions to ultimately accomplish ‘smart’ tasks is a different issue altogether. ↩︎

  3. Following the publication of this article some of you wrote back to me wondering why limits were needed. To maintain generality you are correct that no limits are necessary; but you may need them in some situations: if you wanted to shuffle a playlist with 50 songs and the random number generator picked 137, what song would it play? In such a case having 1 and 50 as our limits to drive the program makes sense. ↩︎

  4. Of course there is nothing special about the number three: quick users will figure it out in two repetitions (the minimum) while slow ones will take many more (there is no upper limit). ↩︎

  5. Imagine the same algorithm as before but with a billion numbers. If you chained the generator to use past outputs as new seeds automatically it would take considerably longer for users to figure out the entire billion-strong cipher. ↩︎

  6. If there are fifteen possible mathematical operations (say) the permutation to pick seven out of those—which is the number of operations in our formula—is 217,945,728,000 possibilities. At the rate of one permutation per second this would take a computer over 6,000 years to figure out unless it gets incredibly lucky, so we are quite safe. ↩︎

  7. One might have wondered by now why I dwell so much on the importance of not being able to game the system. For starters the whole point of a random number generator is randomness; more important, these form the basis of a lot of technology these days on which the economy works, such as predicting stock markets, gambling and even encrypting your communications and payments on the internet. Being able to game these systems and predict the randomness can have devastating real world consequences. ↩︎

  8. At this point I am tempted to use the cringeworthy expression ‘duh’. ↩︎

  9. This is not so outlandish to be honest: Cloudflare’s Singapore office actually uses this method in real life to generate random encryption keys for computers. ↩︎

  10. Cloudflare is an excellent edge caching service and I use it for this website too. ↩︎

  11. As a physicist I feel I must point out here that, knowing the pressure and temperature conditions and the viscosity of the fluid inside the lava lamp, the bubbles are perfectly predictable. However, the effort that goes into such a complex, evolving calculation is crazy enough that I doubt anyone would ever bother with it. Also, you would need to program this into a computer if you intend to keep up with the lava lamp. ↩︎

  12. For a more construction-worthy example take a look at Giorgio Vazzana’s random number generator built atop a Chua circuit. ↩︎

  13. At least until we have free-thinking, sentient robots {laughs nervously}. ↩︎

Installing gnuplot on macOS

A quick—and popular, if Google is to be believed—guide to installing gnuplot on macOS.

Hardly anyone has gone through a college mathematics or physics course without meeting the wonderful gnuplot. However, it turns out that installing gnuplot (or Octave, for that matter — but let us leave that for another day) on a Mac is a pain in the neck. At a time when installing a game takes a couple of clicks, it simply is not straightforward to install gnuplot.

After scratching my head over it for two days straight, I finally installed gcc, gnuplot, Octave and LaTeX on my new Mac (OS X 10.10.3, Yosemite) and decided to note some points/instructions down here for anyone else looking for a simple solution from start to finish contained in one place.


Update, 6 December 2019 In the roughly five years since this article was written, Homebrew has changed its gnuplot installation system and no longer allows picking terminals. This makes it inconvenient, not to mention useless, for our purposes. Thankfully MacPorts still supports a robust, full gnuplot installation, which makes me change my recommendation from ‘always install Homebrew’ to ‘always install both Homebrew and MacPorts’. Install MacPorts by picking a suitable package on their installation page. Then proceed as follows:

  1. Check what terminals are available (optional) with sudo port variants gnuplot
  2. Install gnuplot with the required terminals, e.g. sudo port install gnuplot +qt5 +x11 +aquaterm +wxwidgets will install gnuplot with Qt5, x11, AquaTerm and wxt (wxWidgets).

Click here to skip older updates and go to the main (legacy) article which runs you through Xcode, GCC etc. Specifically, the above two steps replace steps 7 and 8 in the main article below.


Update, 9 August 2017 In the roughly two years since this article was written, a new version of gnuplot has been released, as have two new versions of macOS. A couple of my views have also changed as a result: I now think AquaTerm is good enough for gnuplot and the decision to use X11 should only be a matter of specific needs or taste.

This article has turned out to be more popular than I hoped, with several universities and academic institutions sending lots of traffic towards it, all the way from these interesting lecture notes from Stony Brook University, NY, to this Chinese (I think) forum. In view of public demand, below is a small update.

If all you are interested in doing is installing gnuplot, save yourself the trouble of scrolling down and follow these instructions (tested on macOS High Sierra) to install both AquaTerm and X11, omitting commands as necessary if you only wish to use one and not the other.

  1. Install Homebrew via the Terminal with /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

  2. Install XQuartz if you want X11: brew cask install xquartz

  3. Install AquaTerm if you want it: brew install Caskroom/cask/aquaterm

  4. Install gnuplot (remove the --with- option for any terminal you do not need): brew install gnuplot --with-aquaterm --with-qt4 --with-x11

That is all. Enjoy plotting with gnuplot.


1. Install Xcode

If you have been trying to dodge your way out of Apple’s enormous 2.6GB Xcode app, get ready to face the demon: almost nothing related to coding will work on your Mac without Xcode and its libraries, so head to the App Store and download Xcode.

2. Download command line tools

Download the command line tools for Xcode. You can do this via the Terminal.app (Applications → Utilities → Terminal) with the following command (which you may copy and paste):

xcode-select --install

This gives you a message saying the command line tools have not been installed and offers to install them for you; proceed and install them. It should take some time, though not as much as downloading Xcode.

3. Accept Xcode licences

You need to have accepted the Xcode licence agreements to use many related programmes, so open Terminal and type in the following (you can copy and paste it as usual):

sudo xcodebuild

This gives you a message requiring you to press return/enter to view the licence. Hit enter, read through it and type in agree at the end when prompted. (Type in cancel to disagree, in which case you should probably be reading some other article, not this one.) Once you read and agree, proceed to the next step.

4. GCC

GCC should work straight after installing Xcode. Open Terminal and do the following:

vi Hello.c

In the page that opens, copy and paste these lines:

#include <stdio.h>

int main() {
    printf("\n\n\t\t Hello, world.\n\n");
    return 0;
}

Then press Esc and type :wq and you will return to the first Terminal screen. There, type the following two lines (press Return after each line):

gcc Hello.c -o Hello

./Hello

You should get an output on screen saying ‘Hello, world.’ if all is well. And that is all for GCC, which is actually simple, unlike gnuplot and Octave.

5. LaTeX

MacTeX is LaTeX for the Mac. For basic, and indeed most general, purposes I did not see the need to install the entire MacTeX package, which is 2.5GB in size. As an alternative, I opted for the smaller, yet feature-rich, BasicTeX package which I usually install on all my systems.

You can go with the full package or opt for the smaller one as per your needs. Both files are .pkg and involve the usual GUI installation so this step should be no problem for anyone.

6. Install XQuartz

XQuartz, or colloquially the X11.app, is the Mac version of the X Window System’s display server (the X server familiar from Linux and other Unix-like systems). It is necessary for Homebrew, which we will install next, as well as to use the (somewhat) standard xterm display in gnuplot. Download XQuartz, which comes as a disk image which you can easily mount and install via a package.

The catch here is that the installed X11.app (check Applications → Utilities → X11 after installing) must be started before the Terminal is fired up to run gnuplot. That is the only way to make gnuplot start with the x11 terminal by default. If you find gnuplot aborting, you have probably not started x11.

7. Get Homebrew

Since Macs, unlike Linux, do not come with a built-in package manager, you will have to opt for alternatives, often developed by the community. While MacPorts has been a popular one, I have found it buggy, especially so with the xterm (x11) display for gnuplot, which is both the standard and more convenient than Aqua, the default on the Mac (do not ask me why).

Installing Homebrew, an increasingly preferred MacPorts alternative for good reasons, is simple. Open Terminal and use the command given below:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

This should install Homebrew without a problem. Homebrew commands begin with brew and are fairly easy to follow.

8. Install gnuplot with brew

Installing gnuplot once you have come this far, thankfully, is fairly simple:

brew install gnuplot --with-x11

Remember to specify --with-x11 instead of the older (now deprecated) --x11; a plain brew install gnuplot will install it without x11 support.

9. gnuplot

Now it is time to check gnuplot. Fire up gnuplot:

gnuplot

Then type in a simple plotting command and hit return/enter.

plot sin(x)

This should immediately display a generic sine graph (not saved permanently anywhere, of course) on your screen.

How to save and use plots is beyond the scope of this article, but for now, if you find the output graph giving out an error, make sure you have the x11.app running before you open Terminal.


Hat tip to Michael Budd After writing this article, I came across Michael Budd’s article on installing gnuplot straight from the source files. I tried in vain to get x11 to work well after installing via this method, but perhaps I did not try hard enough. That said, if you are not particular about x11, this is a really quick way to get gnuplot to work.

Note that the third make install command failed on my system several times, so I had to work around it using sudo make install and entering my administrator password. If you do not have administrator rights, Mr Budd’s method may not work.


10. Setting a default display environment

Some machines may still have problems with gnuplot, in which case you could try editing your bash profile. Please try this at your own risk, and if you have no idea what this does, I strongly insist that you do not do it. In any case, I cannot be held responsible for whatever happens to your system.

touch ~/.bash_profile; open ~/.bash_profile

Enter the above command in your Terminal to open your .bash_profile for editing in TextEdit (or whatever else is your default editor). Next, in the open file, type the following, hit ⌘+S to save it, then close it.

if [ -z "$DISPLAY" ]; then
    export DISPLAY=:0.0
fi

Restart Terminal and it should work fine now. This has something to do with gnuplot sending data to the wrong display. You can find more details and a fair bit of discussion on this around the web.

That should be all. You should have a working copy of gnuplot on your machine now. Have fun with it: it supports eps export as well as links to C programmes via the pfile command. So far, in my use, I have not come across any problems after installing it this way, but if you do, let me know and I will try to help you as best as I can.

A new LaTeX class for lecture notes

A new document class for LaTeX built from the ground up for preparing lecture notes, entire course notes and support sheets for individual talks/classes.

So far I have had a fairly good semester teaching postgraduate physics. While preparing to teach classes I went through a simple routine: first, I decided what to cover based on the curriculum, followed by how best to present it; then, I prepared a bunch of talk sheets for myself so that I could be sure of covering everything I hoped to cover; finally, I prepared a bunch of lecture notes including, as far as possible, detailed exposition of everything covered in class.

The first step was necessary for me because I tend to deviate while discussing ideas, constantly bombarded as I am with an exponentially increasing set of ideas from every individual seed of thought. The curriculum provided a foundation to build on and a flexible fence to build within. The second step was important for similar reasons: to keep me in check during my lectures. (That said, I was always open to continuing discussions after class hours.)

The third step, which some students and a surprisingly large number of outsiders benefited from (at least that is my impression based on the emails I have been getting over the past months), was a fairly detailed compilation of ideas discussed in class (but, unfortunately, not discussions outside of class) that was supposed to be minimum reading material for students of the course. They were expected to build from there with other sources as, when, and if, they found the need.

A dedicated LaTeX class

With apologies to Plato, necessity breeds creation and, between semesters, I decided to simplify my job of preparing lecture notes and save time by writing a new document class for LaTeX that would let me quickly enter important details as simply as possible and focus entirely on the content. I call this the lecture class and am quite pleased to say it is finally ready for real world use.

As far as I know, there was no class before this that was dedicated specifically to lecture notes (but I could be wrong). Perhaps there is good reason for this too: no two people have the same requirements, no two classes have the same needs, no two teachers have the same approach to a given topic. There is fair ground, I admit, for not preparing such a class at all. However, I intended to write this for myself to simplify my own workflow, so making it available publicly, for free, would not hurt.

Some time after I started work on this, I found out that similar ideas had been put to paper a few years back by Stefano Maggiolo except with scribes in mind rather than speakers. (As of now his work has been abandoned.) Since I prefer to pen my own notes rather than allot that work to students/scribes, I had a slightly different approach in mind but, nonetheless, based on similar notions.

Having completed about half of my code I was lucky enough to be able to accelerate it when Stefano graciously let me rework parts of his code into my class. I have finally completed it and, as a little perk, since Stefano’s class was originally in Italian, I have retained Italian in my class as a translation. Hopefully I can get equivalent translations in other languages as well down the road.

Addendum I was a little late in publishing this release article. As of now, v2.2 of this project is the latest stable release, and support for French has already been added.

Github + MIT

The lecture class is released on Github with an MIT license, which means you can do absolutely anything you want with it so long as you provide attribution (a link back to this webpage will do just fine) and retain the license and copyright notice with your distribution.

LaTeX lecture class page on Github

On Github, you will find instructions on how to download and use the lecture class so I will not repeat it here. Suffice it to say that if you do it right, it should only be a matter of copying and pasting and invoking.

The purpose of the lecture class remains to offer a quick yet flexible way of entering data common to all your lecture notes and ensuring your notes are formatted with some consistency throughout with no additional effort from you. For example, consider the following code:

\title{Give a nice title}
\author{Your Name}
\email{your@email.tld}

This produces a beautiful title for your document and the author’s byline beneath it along with a footnote that mentions the author’s email at the bottom of the page. Take a look at the sample.pdf file that is available on Github. (The sample.tex file that was used to generate this pdf document is also included with the project.)

Care has been taken to ensure that the lecture class works with pdfTeX, which is the most commonly used (and probably most basic) compiler. What this means, of course, is that the class should work just fine with most other compilers, especially if you choose to extend its functionality. I was not a fan of retaining pdfTeX support but decided in its favour since it is worth it, if nothing else, for the lecture class to support broader use cases.

If you find that the class throws errors please drop me an email (see link at the bottom of this website). Support is not guaranteed for extremely specific scenarios but if it is something that may affect a considerable number of users I will do my best to tackle it. In any case my reason for putting up my lecture class on Github is so that the community will be able to work on it. If you can sort things out yourself or if you have interesting ideas to expand the lecture class (see a fairly detailed list of features below), please fork the project and submit a pull request.

List of features (not exhaustive)

Three types of documents may be constructed using the lecture class. This has been detailed in the Github page as well, so I will keep it brief here: the talk style lets you make two-column, condensed documents to make highly selective notes—this is what I would prepare to take to my lectures; the seminar style lets you make more ‘regular’ notes for lectures—this is the type of notes I would prepare after a lecture as reading material for students; the course style lets you combine several lectures into a single, consolidated course or section of a course.

Computer Modern is boring, and Palatino is quickly joining its ranks. The lecture class therefore typesets documents in Kp-Fonts which, despite the name, has nothing to do with Adobe’s Kepler (a face that would, in my opinion, look rather awful in a scientific document). The best part about this is that it works beautifully with textcomp and amsmath with the full option. (Hat tip to The Johannes Kepler project.)

Note, however, that the lecture class does not use the lighter typefaces; this is something I have given considerable thought to and chose to keep the book weights for better compatibility with bad printers.

The microtype package is called as well. I prefer the way fontspec does things myself but, as stated above, we need good pdfTeX compatibility and all. (You may have noticed by now that the lecture class seems to want to support almost everything without requiring a high-tech system.) Hopefully we can build this into an option system that calls packages regardless of their support for pdfTeX if the user so chooses.

Addendum The idea for such a system of options has been added to the roadmap on Github. It should be easy enough to accomplish and I would certainly like to see it, although I doubt I have the time to dedicate to it at this point.

The head of a document, whether a talk, a seminar or a course, contains the title, a subtitle, the speaker, the scribe, their e-mail addresses, a course code, two optional areas for any text you may wish to include, a start and end date (or just one date), a conference hall location and the name of the institution where the talk was given. Almost all of these are optional but they exist should you ever need them.

All course type documents also automatically produce a table of contents. Besides the options listed above you can also provide an alternate short title that the lecture class will use to set your page headers, because lengthy headers can be both unattractive and distracting.

The lecture class provides headings three levels deep: \section{}, \subsection{} and \subsubsection{}. The table of contents updates itself with these accordingly. As with any use of a table of contents, you will need to compile twice to actually produce the table in your output file.

In lieu of footnotes, the lecture class offers margin notes. There were two reasons why I chose to go in this direction: one, footnotes are hard to navigate to and from on screens whereas notes placed on the margin are right next to the text and can be associated with the main content quite easily; two, using a prominent outer margin in the layout means documents built with the lecture class offer ample space for readers to make their own notes, whether on screens or in print.

Usually align-type environments have insufficient vertical spacing between lines of equations. This has been rectified by default, letting the writer focus on the content rather than worry about formatting (which is why LaTeX exists in the first place). Also, the csquotes package is included in the lecture class, allowing you to exploit its wonderful quote styles to set apart brief sections of your text. Finally, the regular skip/short skip spacing for all display environments has been set carefully and will work so long as you type your document properly, i.e. no unnecessary extra lines in your .tex file and so on (use % instead).

There are other packages that the lecture class calls for better formatting and to achieve certain other results (margins, for example). All of these should be available for free and should be easy to update and maintain as part of your own TeX installation. The (almost) full list of dependencies can be found on the Github page.

Nearly all questions you may have should be answered between this release article, the Github page of my lecture class, and the sample files included with the project. If you have more, get in touch via e-mail. If you wish to build on this, I hope you will fork the project on Github; if not, I would certainly appreciate it if you dropped me a word via e-mail. If you wish to modify or add to it, fork the project and we can merge your changes later.

This class was originally created for my own needs, as a result of which it likely focusses on what I need a lot more than what I think someone else may need, but it is definitely something worth building for a larger audience as we go regardless of how different our approaches may be. If you decide to use this document class for your own work, I would be humbled. I want to mention Stefano again for letting me use some of his code: you really sped things up. And thanks to those who helped test it out as well. Here’s to science.

Internet pollution

One of our greatest inventions has the potential to be our greatest undoing.

Pollution can be defined, in a broad sense, as the introduction of something that is harmful to the environment it is introduced into. Recently, in a way I cannot explicitly describe, I found myself reading an excellent essay by Jasper Morrison, titled ‘Super normal’. Mr Morrison is a British designer known for various things, from the wingnut chair in Lindenplatz to Hanover’s TW2000 light railcars. In his essay, he talks of how design, which was supposed to be ‘responsible for the man-made environment’, has been polluting it instead. Describing good design as not merely normal but ‘super normal’, he goes on to explain that a lack of noticeability is the way to go today.

This analogy can actually be brought to the internet itself. What started as a means of easier, faster communication has now crept into every inch of our lives, making communication overwhelming while slowing down nearly everything else and making productivity an achievement. Both communication and productivity were only ever supposed to be a part of our lives. The internet should have resided in the background, making life easier, not taking it over.

Why did this happen? Why are so many people on the tipping point of addiction? Were people as addicted to the telephone when it was invented, or perhaps the inland letter when postage was first introduced? This brings forth an interesting question: what has the internet done that other media of communication before it did not? The simplest answer would be that the internet offers instant, open-ended interaction; this is something no other medium does. And this seemingly fundamental point is overlooked far too often. With a television, interactions were strictly one-way; with letters, interactions took time; with telephones, interactions were targeted. However, with the internet, interactions are two-way, instant and open-ended, and this is precisely what lets it into every nook and cranny of our lives.

Human beings are social animals and the internet feeds on our strongest desires. The desire to be social, a decade or so ago, could only be indulged if someone was physically with us. This gave a certain weight to it. The internet, in becoming a platform for communication, failed to realise that by letting us communicate anytime, anywhere, it was really exacerbating our perceived social wants. We started to want to communicate even if we did not need to. An excess is a pollutant. Communication, a fundamental human tendency, itself became a distraction. The internet was like a pollutant.

Alongside this, as a platform that made almost all information available readily, it brought along other problems. There was no vetting, unlike in a book or in a library. There were opinions, facts, and disinformation, all liberally available at the cost of a few clicks. A picture of a cat plastered on a street wall would be intriguing at first and then it would simply get monotonous and eventually irritating. In the physical world, citizens would move to have these pictures stripped.

Unfortunately, we do not view the virtual, online world the same way. Any number of cat pictures will do. And this is not an attack on cat photographs; replace it with any pointless source of entertainment and the reasoning holds. In attempting to make information freely available, the internet has made information a sort of getaway from our real lives; it is dangerously becoming a place to relax and temporarily push priorities to the back of your mind.

This was never the purpose of information, yet this is what it has the potential to be now. Information that was once supposed to enrich our minds is becoming synonymous with entertainment. The internet is, it appears, undoing its own building blocks, or, at the very least, moulding them into other, more convenient forms. There is information pollution all around us.

None of this is an attack on the internet as much as on our constant misuse of it. We use the internet as an escape rather than a tool. This is neither generic nor universal, but it does explain well enough why such websites as Facebook, Instagram, Snapchat and others are becoming popular. In the 1690s or even in the 1960s, the effort that went into sharing a description or a picture of what one ate for breakfast was so great that it was simply not worth it. It was, perhaps, laughable.

Today, we share it just because we have the internet and because we can. So what? In the ambience of the internet, the danger in this line of thinking is not readily apparent. Apply this analogy, therefore, to something else: I shoot because I have a gun and I can; I drive a car because I have one and I can. It makes no sense whatsoever. Whether it is a gun or a car, our use of either follows a specific need.

Between the 17th century and the 21st, no new need has cropped up that requires sharing pictures of our delicious food before we consume it. Once again, like the cat pictures, I use this as a stand-in for most pieces of information shared on the web and the argument holds. And with this pattern spread across various types of information and geographies and languages and cultures, it is not hard to see how much unnecessary, meaningless information we are spewing out every second. There is an exact number: 40GB every second. There is a website dedicated to giving you a picture of just how much information is added to the web every second. It covers only the top few heavily used websites, mostly social networks, and the numbers themselves are overwhelming. The actual data is unimaginable. And this is only from less than half the world, because the rest still have no proper internet access.

If everyone comes online and if we never stop to think, evaluate and weigh our contributions to the internet like we would to, say, a book that would get published or a live television show everyone would watch, then searching for useful, valuable information will be like looking for needles in a haystack. A dynamic haystack growing in size and complexity every second, too. The internet really is a wonderful place, but by treating it as a dump yard for data, we are slowly making it useless. As we pollute it, it pollutes us. The internet is like a mirror of humanity (at least for that percentage of humanity that uses it) and we are what we make it.

Lastly, this is not a stab at the internet as an unbiased platform for free speech and expression. (Although, going by recent news of data being tweaked on user timelines on social networks, some parts of the internet may well be biased.) The internet is and always will remain the strongest platform promoting free expression, but free does not mean careless or irresponsible. We should express ourselves freely but responsibly. Doing so might quieten the internet down a little, but it would not undermine its standing as our strongest platform for expression. As Mr Morrison says of design, its historic goal “of conceiving things easier to make and better to live with, has been side-tracked”; so also have the deepest intentions that drove the founding of the internet been put aside to make place for our desire to share and be validated for it. The case here is for using the internet responsibly and not tossing data into it, much as we would not toss garbage into our homes. This will, if anything, strengthen the internet and make it infinitely more useful to us, as it was always intended to be.