Bitcoin’s Time

No, this article is not about whether it’s Bitcoin’s moment to shine. It is a somewhat technical explainer of how Bitcoin implements time, or timestamps. First, we need to understand why Bitcoin needs a notion of time at all; if you don’t care for this, you can jump directly to the “How” section below. The “Why” section of this article is somewhat opinionated. Why does money need timestamps? Bitcoin’s peers are a mixed bag when it comes to timestamping. ...

Trusting Trust

Ken Thompson – a Turing Award winner and an all-around genius – gave a seminal talk in his 1984 Turing Award lecture called “Reflections on Trusting Trust.” In this lecture, he shows how to sneak a Trojan horse into an application (in his case, the Unix operating system) while its source code is being compiled with the C compiler. Say the C compiler has malicious code in it that patiently waits for pieces of code that look like Unix’s source code; the moment it recognizes that it is compiling this particular source code, it adds the Trojan horse code into the final compiled Unix binary. This Unix binary now carries a Trojan horse and is no longer secure. ...

Governance, Decentralized

Define governance: the act or process of governing or overseeing the control and direction of something (such as a country or an organization). In this article, I will focus on whether any organization can have decentralized governance, what that even means, and how it relates to cryptocurrencies. Let’s start with a very basic organization and see whether it can be governed in a decentralized way. What is an organization anyway? Say some people want to pool their money and use it for charity. We have ourselves a rudimentary organization. At the organization’s inception, the founders write some bylaws – for example: for any charitable donation to happen, members holding, say, two-thirds of the remaining capital in the pool have to approve it. These bylaws are written down formally in a “human language” (the language being a “human language” is important). The organization will register itself with the government of its geographical area (let’s say, a country). In case disputes arise in the future, the courts of that country will interpret the bylaws of the organization, apply the relevant common laws of that country, and, with the threat of force, ask the members of the organization to abide by the court’s judgment. We kind of get how this works. ...

So much to learn, ponder about, and do….

And so much time to do it. Scott Aaronson’s “rant” on naysayers of complexity theory got me pondering. Over the last few months, my general scientific pondering has suffered at the hands of real work, the startup-life routine, system building, and a general state of being occupied by the trite. It’s fascinating, but not inspiring. The days are exciting, but not ecstatic. The nights are cathartic, but not liberating. I miss my aimless pondering days. ...

Genius

The Feynman Method: Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lie in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say, “How did he do it? He must be a genius!” ...

Counter Example

Finding counterexamples to conjectures can be notoriously hard (pun? I think not). This is an area of creativity that mostly goes unappreciated. Here’s a personal anecdote: my father once came up with an algorithm to solve a hugely constrained version of the traveling salesman problem. The greedy proof was slightly hand-wavy, and I felt it would be easier to find a counterexample where his algorithm wouldn’t find the optimal tour. Of course, I was just trying to tell him that he couldn’t have solved TSP (or even approximated it). I learnt two lessons that day. ...

Eigen

Jatin forced a comment which is worth expanding a little more. Ever since I found out that the PageRank vector of the web graph is the dominant eigenvector of its matrix representation, I have been meaning to get to the bottom of this eigenvector-eigenvalue concept. I am still snorkeling; it will be a long time before I scuba dive. Most of us studied concepts like simultaneous equations in our high-school algebra classes, but never really wondered about them deeply, much less found them difficult. Problems like “3 oranges and 4 apples cost 17 rupees; 2 oranges and 3 apples cost 12 rupees; how much does each apple cost?” never seemed that difficult. We knew that the equations representing these statements were ‘fluid’ and need not apply to just these statements – an overall tool was being mastered. ...
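The two threads above – solving a small linear system and finding a dominant eigenvector – can be sketched in a few lines of plain Python. The orange/apple numbers come from the puzzle in the post; the tiny three-page “web graph” below is invented for illustration (real PageRank also adds a damping factor, omitted here).

```python
# 1) The simultaneous equations, solved by Cramer's rule:
#      3*orange + 4*apple = 17
#      2*orange + 3*apple = 12
a, b, e = 3.0, 4.0, 17.0
c, d, f = 2.0, 3.0, 12.0
det = a * d - b * c                      # determinant of the 2x2 system
orange = (e * d - b * f) / det
apple = (a * f - e * c) / det
print(orange, apple)                     # -> 3.0 2.0

# 2) Power iteration: repeatedly multiplying a rank vector by the
#    column-stochastic link matrix converges to its dominant eigenvector,
#    which is exactly the PageRank vector mentioned above.
#    L[i][j] = probability of landing on page i coming from page j.
L = [[0.0, 0.5, 1.0],    # page 1 links to 0 and 2; page 2 links to 0
     [1.0, 0.0, 0.0],    # page 0 links only to 1
     [0.0, 0.5, 0.0]]
v = [1 / 3] * 3          # start from a uniform rank vector
for _ in range(50):
    v = [sum(L[i][j] * v[j] for j in range(3)) for i in range(3)]
print(v)                 # converges to [0.4, 0.4, 0.2]
```

Pages 0 and 1 end up with the most rank because every other page ultimately funnels links toward them – the eigenvector encodes that structure.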

Heuristics vs. Provability

Given a problem, we can:

1. Give efficient methods to find exact solutions. When this is possible, all’s well. But alas, this is not possible all the time.
2. Give inefficient methods to find exact solutions (brute force), and bemoan that P != NP (mostly).
3. Propose intuitively appealing heuristics that are quick and give “mostly good” results in practice, and hope that the malicious worst-case input which can tear the heuristic apart never comes about.
4. Prove theoretical approximation guarantees about these heuristics, so that even in the worst case the heuristic won’t go bad beyond some known bound.

Personally, I am fascinated by #4. Most current large-scale problems rely on #3, which hopefully will get elevated to #4 in a few years’ time, and theoreticians will tell the practitioners – “Yeah, your heuristic will never give a solution that is off the optimal by more than 200%.” As a budding researcher, is it better to build simulations, concrete systems, models, etc., and show empirically that certain heuristics work most of the time? Or is it better to take some small heuristic and prove beyond doubt its effectiveness under all circumstances? ...
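A minimal sketch of what #4 looks like in practice, using the textbook example rather than anything from the post: the greedy matching heuristic for Vertex Cover. It is fast and simple, yet provably never returns a cover more than twice the size of the optimum. The graph below is made up.

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for Vertex Cover via a maximal matching."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Take both endpoints of an uncovered edge. Any optimal cover
            # must contain at least one of the two, so we pick at most
            # twice as many vertices as the optimum -- the provable bound.
            cover.add(u)
            cover.add(v)
    return cover

# Toy graph: the optimal cover is {0, 3} (size 2); the heuristic
# returns {0, 1, 2, 3} (size 4), exactly hitting its 2x guarantee.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
print(sorted(vertex_cover_2approx(edges)))   # -> [0, 1, 2, 3]
```

This is the whole appeal of #4: one short argument, embedded in the comment above, turns a plausible greedy rule into a worst-case guarantee.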

Quantum Brain?!?!

In my last post on heuristics and approximation, I asked the question – given realistic scale and scope, how does the human brain solve such problems (like question answering)? Does the brain map everything to combinatorial optimization? Can its modus operandi be applied to automated systems so that they solve our problems…. – which leads to further interesting exercises in intellectual masturbation. Combinatorial optimization is about finding the optimal (best/worst) candidate (based on certain acceptable parameters) among all possible options. A large category of problems can be mapped to combinatorial optimization, and most of these are NP-hard. This means that, assuming P != NP, most of them do not have efficient solutions. ...

Traveling Salesman

The traveling salesman problem (TSP) is a very simple combinatorial optimization problem. Given a set of cities and the distances between each pair, the problem is to find the tour of least total distance that visits each city exactly once. Simple – as in simple to state. But from what I have seen, this seems to be the hardest of the hard problems to solve. To give some background: there are these famous hard problems in computer science that have no known “efficient” solutions. Brute-force solutions work very easily on them, but take an eternity to run. In spite of 40 years of extensive research by some very smart people, brute force seems to be the only solution. For example, the TSP can be solved by listing all possible tours that cover the cities once, and picking the one with the least length. This takes an eternity, as the number of such tours (n!) grows faster than any exponential. These problems are the (in)famous NP-hard problems. Stephen Cook at the University of Toronto proved that if any one of them (more precisely, any NP-complete problem) can be solved efficiently, all of them can be!!! A very intuitively pleasing fact. ...
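The brute-force approach described above can be sketched in a few lines of Python: enumerate every tour, keep the shortest. The distance matrix is made up; the method is fine for a handful of cities and hopeless beyond a dozen or so, since there are n! tours.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by exhaustive enumeration. dist is a symmetric matrix."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    # Fix city 0 as the start so rotations of the same tour are not
    # counted again; this still leaves (n-1)! candidate tours.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Made-up distances between 4 cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(tsp_brute_force(dist))   # -> (23, (0, 1, 3, 2))
```

At 4 cities this checks 3! = 6 tours; at 20 cities it would already be 19! ≈ 1.2 × 10^17 – the “eternity” the post refers to.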