Archive

Archive for the ‘Programming’ Category

Chocolatey For Business / Chocolatey Professional Coming May 2

April 23rd, 2016

This is a very exciting time for Chocolatey! Over the past 5 years, there have been some amazing points in Chocolatey’s history, and now we are less than 10 days from another: licensed editions become available for purchase! This is the moment when we can offer features that help businesses better manage software through Chocolatey, and offer non-free features to our community. It also marks the next step the community (and organizations) take to ensure the longevity of Chocolatey for the next 10-20 years. I started this process with a dream and a Kickstarter, and now it’s finally coming to fruition!

Features

Here is a list of the licensed features that will be coming in May. I really think you are going to like what we’ve been cooking up:

  • Malware protection / Virus scanning – Automatic protection from software flagged by multiple virus scanners – …
  • No more 404s – Alternate permanent download location for Professional customers. …
  • Integration with existing Antivirus – Great for businesses that don’t want to reach out to VirusTotal.
  • (Business Only) Create packages from software files/installers – Do you keep all the applications you install for your business internally somewhere? Chocolatey can automatically create packages for all the software your organization uses in under 5 minutes! – Shown as a preview in a March webinar (fast forward to 36:45)
  • Install Directory Switch – You no longer need to worry about the underlying arguments each native installer requires to install software into a custom location. Simply pass one directory switch to Chocolatey and it will handle the rest (see the sketch after this list).
  • Support and prioritization of bugs and features for customers.
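
As a quick sketch of the install directory switch described above – the exact flag name in the licensed editions may differ, so treat the switch and the paths as illustrative assumptions:

    # One switch, regardless of whether the underlying installer is MSI, NSIS, or Inno Setup
    choco install git --install-directory="D:\Programs\Git"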

Sold! But How Do I Buy?

While we are still getting the front-end systems set up and ensuring all of the back-end systems are in place and working properly, we are limiting availability to the first 500 professional licenses and 20 business customers (note: we do not expect any issues with our payment processor). Because we are limiting availability, you must register for the Go Live Event at https://chocolatey.eventbrite.com if you are interested.

It bears repeating: the links for purchase will only be sent to folks who have registered for the event, so secure your spot now!


Chocolatey has a New Logo!!!

April 19th, 2016
[Image: the new Chocolatey logo]

A designer started a conversation with us back in December 2014, and we’ve finally arrived at a decision for Chocolatey – a new logo (and soon a new website)! A special thanks goes out to Julian Krispel-Samsel!


Here’s The Programming Game You Never Asked For

April 15th, 2016

You know what’s universally regarded as un-fun by most programmers? Writing assembly language code.

As Steve McConnell said back in 1994:

Programmers working with high-level languages achieve better productivity and quality than those working with lower-level languages. Languages such as C++, Java, Smalltalk, and Visual Basic have been credited with improving productivity, reliability, simplicity, and comprehensibility by factors of 5 to 15 over low-level languages such as assembly and C (Brooks 1987, Jones 1998, Boehm 2000). You save time when you don’t need to have an awards ceremony every time a C statement does what it’s supposed to.

Assembly is a language where, for performance reasons, every individual command is communicated in excruciating low level detail directly to the CPU. As we’ve gone from fast CPUs, to faster CPUs, to multiple absurdly fast CPU cores on the same die, to “gee, we kinda stopped caring about CPU performance altogether five years ago”, there hasn’t been much need for the kind of hand-tuned performance you get from assembly. Sure, there are the occasional heroics, and they are amazing, but in terms of Getting Stuff Done, assembly has been well off the radar of mainstream programming for probably twenty years now, and for good reason.

So who in their right mind would take up tedious assembly programming today? Yeah, nobody. But wait! What if I told you your Uncle Randy had just died and left behind this mysterious old computer, the TIS-100?

And what if I also told you the only way to figure out what that TIS-100 computer was used for – and what good old Uncle Randy was up to – was to read a (blessedly short 14 page) photocopied reference manual and fix its corrupted boot sequence … using assembly language?

Well now, by God, it’s time to learn us some assembly and get to the bottom of this mystery, isn’t it? As its creator notes, this is the assembly language programming game you never asked for!
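
For a taste of what you’re in for, here’s a program for a single TIS-100 node – it reads each value from the node above, doubles it, and passes it to the node below. (Reconstructed from memory of the game’s instruction set, so treat it as illustrative.)

    MOV UP, ACC    # read the next input value into the accumulator
    ADD ACC        # add ACC to itself, i.e. double it
    MOV ACC, DOWN  # pass the result to the node below

Each node gives you only an accumulator, a backup register, and a dozen-odd instructions, which is precisely what makes the puzzles so maddening and so satisfying.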

I was surprised to discover my co-founder Robin Ward liked TIS-100 so much that he not only played the game (presumably to completion) but wrote a TIS-100 emulator in C. This is apparently the kind of thing he does for fun, in his free time, when he’s not already working full time with us programming Discourse. Programmers gotta … program.

Of course there’s a long history of programming games. What makes TIS-100 unique is the way it fetishizes assembly programming, while most programming games take it a bit easier on you by easing you in with general concepts and simpler abstractions. But even “simple” programming games can be quite difficult. Consider one of my favorites on the Apple II, Rocky’s Boots, and its sequel, Robot Odyssey. I loved this game, but in true programming fashion it was so difficult that finishing it in any meaningful sense was basically impossible:

Let me say: Any kid who completes this game while still a kid (I know only one, who also is one of the smartest programmers I’ve ever met) is guaranteed a career as a software engineer. Hell, any adult who can complete this game should go into engineering. Robot Odyssey is the hardest damn “educational” game ever made. It is also a stunning technical achievement, and one of the most innovative games of the Apple IIe era.

Visionary, absurdly difficult games such as this gain cult followings. It is the game I remember most from my childhood. It is the game I love (and despise) the most, because it was the hardest, the most complex, the most challenging. The world it presented was like being exposed to Plato’s forms, a secret, nonphysical realm of pure ideas and logic. The challenge of the game—and it was one serious challenge—was to understand that other world. Programmer Thomas Foote had just started college when he picked up the game: “I swore to myself,” he told me, “that as God is my witness, I would finish this game before I finished college. I managed to do it, but just barely.”

I was happy dinking around with a few robots that did a few things, got stuck, and moved on to other things. To be honest, I got a little turned off by the way it treated programming as electrical engineering; messing around with a ton of AND, OR, and NOT gates was just not my jam. I was already cutting my teeth on BASIC by that point, and I sensed a level of mastery was necessary here that I probably didn’t have and wasn’t sure I even wanted. I mean seriously, look at this:

I’ll take a COBOL code listing over that monstrosity any day of the week. Perhaps Robot Odyssey was so hard because, in the end, it was a bare metal CPU programming simulation, like TIS-100.

A more gentle example of a modern programming game is Tomorrow Corporation’s excellent Human Resource Machine.

It has exactly the irreverent sense of humor you’d expect from the studio that built World of Goo and Little Inferno, both excellent and highly recommendable games in their own right. It starts with only 2 instructions and slowly widens to include 11. If you’ve ever wanted to find out whether someone is truly interested in programming, recommend this game to them and see. Corporate drudgery has never been so … er, fun?

I’m thinking about this because I believe there’s a strong connection between these kinds of programming games and being a talented software engineer. It’s that essential sense of play, the idea that you’re experimenting with this stuff because you enjoy it, and you bend it to your will out of the sheer joy of creation more than anything else. As I once said:

Joel implied that good programmers love programming so much they’d do it for no pay at all. I won’t go quite that far, but I will note that the best programmers I’ve known have all had a lifelong passion for what they do. There’s no way a minor economic blip would ever convince them they should do anything else. No way. No how.

Here’s where I am going with this: I’d rather sit a potential hire in front of Human Resource Machine and time how long it takes them to work through a few levels than have them solve FizzBuzz for me on a whiteboard. Is the interview merely about demonstrating competency in a certain technical skill that’s worth a certain amount of money, or about showing how you can improvise and have fun?

That’s why I was so excited when Patrick, Thomas, and Erin founded Starfighter.

If you want to know how good a programmer is, give them a real-ish simulation of a real-ish system to hack against and experiment with – and see how far they get. In security parlance this is known as a CTF, as popularized by Defcon. But it’s rarely extended to programming, until now. Their first simulation is StockFighter.

Participants are given:

  • An interactive trading blotter interface
  • A real, functioning set of limit-order-book venues
  • A carefully documented JSON HTTP API, with an API explorer
  • A series of programming missions.

Participants are asked to:

  • Implement programmatic trading against a real exchange in a thickly traded market (a sketch of this follows the list).
  • Execute block-shopping trading strategies.
  • Implement electronic market makers.
  • Pull off an elaborate HFT trading heist.
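
Here’s a minimal sketch of what that first mission – programmatic trading against a JSON HTTP API – might look like in Python. The base URL, auth header, and order fields below are assumptions for illustration; the real contract lives in StockFighter’s documented API explorer:

    import requests

    API = "https://api.stockfighter.io/ob/api"  # assumed base URL
    HEADERS = {"X-Starfighter-Authorization": "YOUR_API_KEY"}  # assumed auth header

    # Place a limit order to buy 100 shares on a test venue (all names illustrative).
    order = {
        "account": "EXB123456",
        "venue": "TESTEX",
        "stock": "FOOBAR",
        "price": 5100,  # in cents: $51.00
        "qty": 100,
        "direction": "buy",
        "orderType": "limit",
    }
    response = requests.post(API + "/venues/TESTEX/stocks/FOOBAR/orders",
                             json=order, headers=HEADERS)
    print(response.json())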

This is a seriously next level hiring strategy, far beyond anything else I’ve seen out there. It’s so next level that to be honest, I got really jealous reading about it, because I’ve felt for a long time that Stack Overflow should be doing yearly programming game events exactly like this, with special one-time badges obtainable only by completing certain levels on that particular year. As I’ve said many times, Stack Overflow is already a sort of game, but people would go nuts for a yearly programming game event. Absolutely bonkers.

We all give lip service to the idea of hiring the best, but if that’s really what you want to do, consider that the best programmers I’ve ever known have excelled at exactly the situation Starfighter simulates — live troubleshooting and reverse engineering of an existing system, even to the point of finding rare exploits.

Consider the dedication of this participant who built a complete wireless trading device for StockFighter. Was it necessary? Was it practical? No. It’s the programming game we never asked for.

An arbitrary programming game is neither practical nor necessary, but it is a wonderful expression of the inherent joy in playing and experimenting with code. If I could find them, I’d gladly hire a dozen people just like that any day, and set them loose on our very real programming project.


Bash on Windows–What it Means for Chocolatey

March 31st, 2016

Microsoft announced the most amazing thing at //build/ yesterday, Bash on Windows 10. Not some sort of VM or container, but running native ELF binaries on Windows under an Ubuntu subsystem. Let me say that again slowly. Windows running native Linux binaries. Not recompiled. Go read http://blog.dustinkirkland.com/2016/03/ubuntu-on-windows.html, I’ll wait.

Linux geeks can think of it sort of the inverse of “wine” — Ubuntu binaries running natively in Windows. Microsoft calls it their “Windows Subsystem for Linux” –Dustin Kirkland

In case you missed the announcement, head to https://channel9.msdn.com/Events/Build/2016/KEY01 and fast forward to 48:15.

Almost immediately, folks started asking what this means for Chocolatey. It’s a great question. Here’s the lowdown: this is fantastic for Chocolatey! You now have a great way to get Unix apps and utilities with dpkg/apt in addition to Windows apps and software with choco. More developers are going to be using the terminal to get things done, which means more users of both apt and choco, and more productivity for Windows users and developers. Think about that for a second. On no other platform will you have this ability. It’s an exciting time to be on Windows!

What you can expect to see is more collaboration between choco and apt if they can communicate. Just like you can work with choco install --source windowsfeatures (back in the latest 0.9.10 betas!), expect to see choco install rsync --source apt eventually (tracked at https://github.com/chocolatey/choco/issues/678).
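
At the prompt, that might look like the following. The windowsfeatures source exists in the 0.9.10 betas; the apt source is speculative, so treat the second command (and the feature name) as assumptions:

    # In the 0.9.10 betas: install a Windows feature through choco
    choco install IIS-WebServerRole --source windowsfeatures

    # Envisioned interop with the Ubuntu subsystem (issue #678, not yet implemented)
    choco install rsync --source apt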

Coming up soon, you are going to see what’s in the next version of Chocolatey and why it is another big leap in package management for Windows!

Here’s a preview with PowerShell tab completion and updating path (environment variables) without needing to restart PowerShell (https://raw.githubusercontent.com/wiki/chocolatey/choco/images/gifs/choco_install.gif if the image doesn’t show):


Celebrating 5 Years With Chocolatey!

March 28th, 2016
[Chart: Chocolatey usage by downloads, 2013–2015]

Chocolatey turned 5 years old recently! I committed the first lines of Chocolatey code on March 22, 2011. At that time I never imagined that Chocolatey would grow into a flourishing community and a tool that is widely used by individuals and organizations to help automate the wild world of Windows software. It’s come a long way since I first showed off early versions of Chocolatey to some friends for feedback. Over the last 2 years things have really taken off!

The number of downloads has really increased year over year!

Note: While not a completely accurate representation of usage and popularity, the number of downloads gives pretty good context. Going up by 7 million downloads in 2014 and then by almost 30 million in a single year really shows a trend!

Note: The Chocolatey package has about 1,000 downloads per hour. I shut off the statistics for the install script back in October 2015 due to the extreme load on the site, so the number of Chocolatey package downloads is missing some of the statistics.

History

Let’s take a little stroll through some of the interesting parts of Chocolatey’s history. The history of Chocolatey really starts when I joined the Nubular (Nu) team in summer 2010.

This doesn’t represent everything that has happened. I tried to list out and attribute everything I could find and remember. There have been so many amazing package maintainers over the years, there are too many of you to possibly list. You know who you are. You have made the community what it is today and have been instrumental in shaping enhancements in Chocolatey.

Looking to the Future

The community has been amazing in helping Chocolatey grow and showing that there is a need that it fills. Package maintainers have put in countless and sometimes thankless hours to ensure community growth and consumers have really found the framework useful! Thank you so much! The next version of Chocolatey is coming and it is going to be amazing. Here’s to the next 5 years, may we change the world of Windows forever!


Thanks For Ruining Another Game Forever, Computers

March 25th, 2016

In 2006, after visiting the Computer History Museum’s exhibit on Chess, I opined:

We may have reached an inflection point. The problem space of chess is so astonishingly large that incremental increases in hardware speed and algorithms are unlikely to result in meaningful gains from here on out.

So. About that. Turns out I was kinda … totally completely wrong. The number of possible moves, or “problem space”, of Chess is indeed astonishingly large, estimated to be 10^50:

100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000

Deep Blue was interesting because it forecast a particular kind of future, a future where specialized hardware enabled brute force attack of the enormous chess problem space, as its purpose built chess hardware outperformed general purpose CPUs of the day by many orders of magnitude. How many orders of magnitude? In the heady days of 1997, Deep Blue could evaluate 200 million chess positions per second. And that was enough to defeat Kasparov, the highest ever ranked human player – until 2014 at least. Even though one of its best moves was the result of a bug.

200,000,000

In 2006, about ten years later, according to the Fritz Chess benchmark, my PC could evaluate only 4.5 million chess positions per second.

4,500,000

Today, about twenty years later, that very same benchmark says my PC can evaluate a mere 17.2 million chess positions per second.

17,200,000

Ten years, four times faster. Not bad! Part of that is I went from dual to quad core, and these chess calculations scale almost linearly with the number of cores. An eight core CPU, no longer particularly exotic, could probably achieve ~28 million on this benchmark today.

28,000,000

I am not sure the scaling is exactly linear, but it’s fair to say that even now, twenty years later, a modern 8 core CPU is still about an order of magnitude slower at the brute force task of evaluating chess positions than what Deep Blue’s specialized chess hardware achieved in 1997.

But here’s the thing: none of that speedy brute forcing matters today. Greatly improved chess programs running on mere handheld devices can perform beyond grandmaster level.

In 2009 a chess engine running on slower hardware, a 528 MHz HTC Touch HD mobile phone running Pocket Fritz 4, reached grandmaster level – it won a category 6 tournament with a performance rating of 2898. Pocket Fritz 4 searches fewer than 20,000 positions per second. This is in contrast to supercomputers such as Deep Blue that searched 200 million positions per second.

As far as chess goes, despite what I so optimistically thought in 2006, it’s been game over for humans for quite a few years now. The best computer chess programs, vastly more efficient than Deep Blue, combined with modern CPUs which are now finally within an order of magnitude of what Deep Blue’s specialized chess hardware could deliver, play at levels far beyond what humans can achieve.

Chess: ruined forever. Thanks, computers. You jerks.

Despite this resounding defeat, there was still hope for humans in the game of Go. The number of possible moves, or “problem space”, of Go is estimated to be 10^170:

1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000

Remember that Chess had a mere fifty zeroes there? Go has more possible moves than there are atoms in the universe.

Wrap your face around that one.

Deep Blue was a statement about the inevitability of eventually being able to brute force your way around a difficult problem with the constant wind of Moore’s Law at your back. If Chess is the quintessential European game, Go is the quintessential Asian game. Go requires a completely different strategy. Go means wrestling with a problem that is essentially impossible for computers to solve in any traditional way.

A simple material evaluation for chess works well – each type of piece is given a value, and each player receives a score depending on his/her remaining pieces. The player with the higher score is deemed to be ‘winning’ at that stage of the game.

However, Chess programmers innocently asking Go players for an evaluation function would be met with disbelief! No such simple evaluation exists. Since there is only a single type of piece, only the number each player has on the board could be used for a simple material heuristic, and there is almost no discernible correlation between the number of stones on the board and what the end result of the game will be.
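
To make the contrast concrete, here’s a minimal sketch of the chess material heuristic described above, using the conventional piece values (real engines tune these weights and add positional terms, so treat them as illustrative):

    # Conventional material values: pawn 1, knight 3, bishop 3, rook 5, queen 9.
    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

    def material_score(board):
        """board: piece codes, uppercase for White, lowercase for Black.
        Returns White's material advantage; positive means White is 'winning'."""
        score = 0
        for piece in board:
            value = PIECE_VALUES.get(piece.upper(), 0)  # kings score 0 here
            score += value if piece.isupper() else -value
        return score

    # White is up a rook:
    print(material_score(["K", "Q", "R", "R", "k", "q", "r"]))  # prints 5

No comparably simple heuristic exists for Go, which is exactly the point of the passage above.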

Analysis of a problem this hard, with brute force completely off the table, is colloquially called “AI”, though that term is a bit of a stretch to me. I prefer to think of it as building systems that can learn from experience, aka machine learning. Here’s a talk which covers DeepMind learning to play classic Atari 2600 videogames. (Jump to the 10 minute mark to see what I mean.)

As impressive as this is – and it truly is – bear in mind that games as simple as Pac-Man still remain far beyond the reach of DeepMind. But what happens when you point a system like that at the game of Go?

DeepMind built a system, AlphaGo, designed to see how far they could get with those approaches in the game of Go. AlphaGo recently played one of the best Go players in the world, Lee Sedol, and defeated him in a stunning 4-1 display. Being the optimist that I am, I guessed that DeepMind would win one or two games, but a near total rout like this? Incredible. In the space of just 20 years, computers went from barely beating the best humans at Chess, with a problem space of 10^50, to definitively beating the best humans at Go, with a problem space of 10^170. How did this happen?

Well, a few things happened, but one unsung hero in this transformation is the humble video card, or GPU.

Consider this breakdown of the cost of floating point operations over time, measured in dollars per gigaflop:

1961 $8,300,000,000
1984 $42,780,000
1997 $42,000
2000 $1,300
2003 $100
2007 $52
2011 $1.80
2012 $0.73
2013 $0.22
2015 $0.08

What’s not clear in this table is that after 2007, all the big advances in FLOPS came from gaming video cards designed for high speed real time 3D rendering, and as an incredibly beneficial side effect, they also turn out to be crazily fast at machine learning tasks.

The Google Brain project had just achieved amazing results — it learned to recognize cats and people by watching movies on YouTube. But it required 2,000 CPUs in servers powered and cooled in one of Google’s giant data centers. Few have computers of this scale. Enter NVIDIA and the GPU. Bryan Catanzaro in NVIDIA Research teamed with Andrew Ng’s team at Stanford to use GPUs for deep learning. As it turned out, 12 NVIDIA GPUs could deliver the deep-learning performance of 2,000 CPUs.

Let’s consider a related case of highly parallel computation. How much faster is a GPU at password hashing?

Radeon 7970 8213.6 M c/s
6-core AMD CPU 52.9 M c/s

Only 155 times faster right out of the gate. No big deal. On top of that, CPU performance has largely stalled in the last decade. While more and more cores are placed on each die, which is great when the problems are parallelizable – as they definitely are in this case – the actual performance improvement of any individual core over the last 5 to 10 years is rather modest.

But GPUs are still doubling in performance every few years. Consider password hash cracking expressed in the rate of hashes per second:

GTX 295 2009 25k
GTX 690 2012 54k
GTX 780 Ti 2013 100k
GTX 980 Ti 2015 240k

That last video card is the one in my machine right now. It’s likely the next major revision from Nvidia, due later this year, will double these rates again.

(While I’m at it, I’d like to emphasize how much it sucks to be an 8 character password in today’s world. If your password is only 8 characters, that’s perilously close to no password at all. That’s also why your password is (probably) too damn short. In fact, we just raised the minimum allowed password length on Discourse to 10 characters, because annoying password complexity rules are much less effective in reality than simply requiring longer passwords.)

Distributed AlphaGo used 1202 CPUs and 176 GPUs. While that doesn’t sound like much, consider that as we’ve seen, each GPU can be up to 150 times faster at processing these kinds of highly parallel datasets — so those 176 GPUs were the equivalent of adding ~26,400 CPUs to the task.

Even if you don’t care about video games, they happen to have a profound accidental impact on machine learning improvements. Every time you see a new video card release, don’t think “slightly nicer looking games”; think “wow, hash cracking and AI just got 2× faster … again!”

I’m certainly not making the same mistake I did when looking at Chess in 2006. (And in my defense, I totally did not see the era of GPUs as essential machine learning aids coming, even though I am a gamer.) If AlphaGo was intimidating today, having soundly beaten the best human Go player in the world, it’ll be no contest after a few more years of GPUs doubling and redoubling their speeds again.

AlphaGo, broadly speaking, is the culmination of two very important trends in computing:

  1. Huge increases in parallel processing power driven by consumer GPUs and videogames, which started in 2007. So if you’re a gamer, congratulations! You’re part of the problem-slash-solution.

  2. We’re beginning to build sophisticated (and combined) algorithmic approaches for entirely new problem spaces that are far too vast to even begin being solved by brute force methods alone. And these approaches clearly work, insofar as they mastered one of the hardest games in the world, one that many thought humans would never be defeated in.

Great. Another game ruined by computers. Jerks.

Based on our experience with Chess, and now Go, we know that computers will continue to beat us at virtually every game we play, in the same way that dolphins will always swim faster than we do. But what if that very same human mind were capable of not only building the dolphin, but continually refining it until it arrived at the world’s fastest minnow? Where Deep Blue was the more or less inevitable end result of brute force computation, AlphaGo is the beginning of a whole new era of sophisticated problem solving against far more enormous problems. AlphaGo’s victory is not a defeat of the human mind, but its greatest triumph.

(If you’d like to learn more about the powerful intersection of sophisticated machine learning algorithms and your GPU, read this excellent summary of AlphaGo and then download the DeepMind Atari learner and try it yourself.)

