Category Archives: programming

Codecademy scores 10 million and Familycoding gets a nice shout out…

So six months ago Ben and I signed up for Code Year, along with Michael Bloomberg and about 400,000 other people.  We didn't even really know what coding was.  Codecademy was just a five-month-old puppy of a start-up.

And look at us now.  Here's Codecademy announcing $10 million in their second round of venture funding (a big chunk of it from Richard Branson).  But notice who's mentioned in the list of accomplished students.  Yes, that's us: Juliet Waters and her son Ben.

I haven't been blogging as much lately because I'm deep into Human Computer Interaction, a free five-week online course given by Stanford through Coursera, another high-quality free education startup.  There I'm designing my first web app and getting rigorously vetted by my online peers.  But I'm still keeping up with my Code Year.  More than ever, I'm going to need all that JavaScript to get it functional.

If that weren't keeping me busy enough, last night I went to the first meeting of organizers of Montreal's inaugural Mini Maker Faire, which will be at the Olympic Stadium August 25-26.

One day soon, I will come up for air and do a nice long blog post.

In the meantime, here's MythBusters' Adam Savage talking about the importance of taking risks….

The Women


Rear Admiral Grace Hopper, surrounded by her team of programmers

One winter Augusta Ada Byron King, Countess of Lovelace, became obsessed with a puzzle that had become popular in the circles of Victorian aristocracy. Peg Solitaire starts with thirty-two pegs arranged on a board in the shape of a cross around a central, empty space. The goal is to jump over adjacent pegs, which are then removed, until only one peg remains.

Ada Lovelace was the only legitimate child of Lord Byron, the rock star poet of the Romantic Movement. A bitter separation meant that Ada never knew her father.  As an eccentric antidote to what her mother, Annabella Milbanke, Baroness Wentworth, perceived as an insanity rooted in a talent for poetry, it was arranged that Ada be tutored from an early age by some of the era's great mathematicians and scientists.

At the age of seventeen, she met Charles Babbage, creator of the first computer prototype. From the questions she asked about his "Thinking Machine," Babbage could tell Ada was a better mathematician than most of the university graduates he knew. They developed a collaborative correspondence that would last the rest of her life.

Ada’s winter of peg solitaire produced an inspiration and she wrote to Babbage: “I have done it by trying & observation & can now do it at any time, but I want to know if the problem admits of being put into a mathematical Formula, & solved in this manner …. There must be a definite principle, a compound I imagine of numerical & geometrical properties, on which the solution depends, & which can be put into symbolic language.”

From that point on she put her talent and education towards understanding this symbolic language, and wrote what is now regarded in the history of computer science as the first recursive algorithm.

“The Analytical Engine does not occupy common ground with mere “calculating machines.” It holds a position wholly its own. . . A new, a vast, and a powerful language is developed . . . in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind than the means hitherto in our possession have rendered possible. Thus not only the mental and the material, but the theoretical and the practical in the mathematical world, are brought into more intimate and effective connexion with each other…We may say most aptly, that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves.”

This is why, if you take Stanford's online course Introduction To Computer Science: Programming Methodology, you will learn from its charismatic professor Mehran Sahami that Ada Byron is considered the first computer programmer. If you signed up last week for Stanford's five-week Human Computer Interaction studio course, offered free through Coursera, you would have learned from associate professor Scott Klemmer about Rear Admiral Grace Hopper, the inventor of the first compiler.  Hopper is not only credited with popularizing the word "debugging" after a moth was discovered in the lab; she also conceptualized machine-independent programming languages and oversaw the team that invented COBOL.

If you don't have time to take a free Stanford course, at least read this digest of a Stanford talk by CS historian Nathan Ensmenger. Talking about his book The Computer Boys Take Over: Computer Programmers and the Politics of Technical Expertise, Ensmenger explained how the world of computer programming was once so dominated by women that it was stereotyped as a female profession. In the 1940s the University of Pennsylvania hired six women to work its ENIAC machine, generally considered one of the first computers. The "ENIAC girls" are considered the first computer programmers in the U.S.  When Cosmopolitan interviewed Hopper in 1967, she explained why it was such a particularly good career choice for women. Programming, she explained, was "just like planning a dinner. You have to plan ahead and schedule everything so that it's ready when you need it…. Women are 'naturals' at computer programming."

What happened? A job shortage in the 60s resulted in the equivalent of an affirmative action program to make the profession more appealing to men. Newly created professional associations actively discouraged the hiring of women. Computer industry campaigns linked women to error. Programming aptitude tests, the answers to which were widely available in fraternities and Elks lodges, were introduced to advance the prospects of men and set up barriers for women. The ongoing job shortage, however, meant that women continued to be hired. By 1985, women still represented 37% of computer science graduates. That was the year Radia Perlman invented the Spanning Tree Protocol (STP). Because STP is so fundamental to how computer network bridges work, Perlman has been called "The Mother Of The Internet."

Currently women represent 18% of computer science graduates in the United States.

I knew none of this when I signed up for Codecademy's Code Year challenge in January. But by June 2, when the New York Times ran an article on a highly publicized sexual harassment case in Silicon Valley, I knew enough to balk at the lede: "MEN invented the Internet. And not just any men. Men with pocket protectors. Men who idolized Mr. Spock and cried when Steve Jobs died. Nerds. Geeks. Give them their due. Without men, we would never know what our friends were doing five minutes ago."

Fortunately, at least one other woman did more than balk:
"What a steaming turd of an opening line in David Streitfeld's otherwise serviceable New York Times piece about the Ellen Pao/Kleiner Perkins sexual harassment lawsuit, and gender discrimination in Silicon Valley," Xeni Jardin blogged on Boing Boing. When she tweeted her post she was greeted with enough "Hell, yeahs" that her storification of Twitter responses reads like an instant oral history.  A history written not only by women, but by men who had learned programming from their mothers and who proudly traced their programming lineage back to grandmothers who were pioneers in the profession.

Much, perhaps too much, is made of the need to find ways to "attract" women into the field of computer science. How about we reframe this as a restoration of the place of women in computer science?

Let's go a step further. Let's restore it as a place that is welcoming to the average citizen. Nothing against geeks; I consider myself one, and have no shame about that. But it's time for computer science to stop pretending this is a skill that can only be learned by boy geniuses.

There will always be a place for boy geniuses, and a need for programmers both men and women with advanced math skills. But more natural, user friendly languages and tools are being invented every year to make basic programming skills more accessible to children and adults of any age: from MIT’s ingenious Scratch to last week’s release of Blockly, Google’s first visual programming language.

The time has come for everyone to occupy the world of information science. It doesn't matter whether people choose that world as a career or a leisure-time obsession, for one month, one year, one winter or, hopefully, this summer. It doesn't matter whether people start it at Stanford, Codecademy, or Code Hero (a role-playing video game that aims to teach code literacy under the mentorship of Babbage, Lovelace and Alan Turing). It doesn't matter when or why we learn to code. What matters is that a critical mass of people start somewhere so that we can reverse, or at least buffer, a growing trend towards techno-elitism.

To use the three important words that mother coders have used since the dawn of time, before the invention of computers, and, if all goes well, for millennia to come:

Just try it.

Things Being More Equal Than Others

It will soon be six months since I started my Code Year pledge with codecademy.com. I’m still going strong. I’ve even started beta testing a few courses ahead of time.  But this doesn’t mean that learning to program has been easy.

All my new learning is at the fresh cement stage. If I don’t take stock while I can still see the rocky road behind me, I become useless to the people still on it. So, I’ve decided this would be a good time to write about one of my biggest stumbling blocks coming out of the gate.

It was that damn = sign, and the subtle, but really important ways that this sign is different in imperative programming than it is in arithmetic and algebra.

For those of us who never continued with math beyond high school,  = has a pretty rigid meaning. It means “the same as”.  Things on each side of it evaluate as the same.  Sure, we understand that the value of a variable can change.  If  x = y + 1 in one algebra exercise,  we accept  x = 2y + 1 in the next one.  But essentially, what isn’t supposed to change is that both things on each side of that symbol have the same value.

In JavaScript, however, = means something more like "attached to". Or "associated with", or "same type as", or "contains all of these things", or "the same as, for a limited time only!", depending on the context in which it is being used.

Much of  coding is  building quickie archives of associations, archives that can just as quickly be dismantled. So programming needs an equal sign to have a much broader, less sticky meaning than it does in math.  In math the equal sign is like glue.  In programming it’s more like a post-it note.

For instance, x = 0 used in a programming algorithm usually does not really mean x is equal to 0.  It's a way of saying that x is a number and it will be starting at 0. So if we put x in a standard programming loop like for (x = 0; x < 10; x++), it means that x's value is going to increase by one each of the 10 times we run that loop.
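
Here's that loop as a complete, runnable JavaScript snippet. (The console.log line is just my placeholder for whatever work the loop actually does.)

// x = 0 doesn't state a fact about x; it just starts x off at 0
for (var x = 0; x < 10; x++) {
  console.log(x); // prints 0 through 9, as x gets a new value stuck to it each pass
}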

If we write x = "" then what we're saying is that x is a string, i.e. some kind of phrase, which usually means x will be used as a container for whatever words or sentences we want to plug into it.

If we want to make x stand for a particular series of actions, we turn it into a function by writing x = function () {…}. That series of actions will be repeated every time we write x().

x can also be an array, a list of things, as in x = [1, "train", "$", 104, "poodle"] (the words need quotation marks so JavaScript knows they're strings).
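
To see just how un-sticky that post-it note is, here's one JavaScript variable wearing all of these hats in a row (the values are my own placeholders):

var x = "";                       // a post-it note stuck to an empty string
x = "any words we want";          // re-stick the note onto a phrase

x = function () {                 // now the note marks a series of actions
  console.log("doing the thing");
};
x();                              // runs those actions, as often as we like

x = [1, "train", "$", 104, "poodle"]; // and now it marks an array, a list of things
console.log(x[4]);                // "poodle": items are fetched by their position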

In programming, if you want to convey that something is actually equal in the way normal people understand equal, you add an extra =, or, to be really safe in JavaScript, two extra equal signs: x === y (the triple version also insists both sides are the same type). This comparison produces what is called a "Boolean" value, i.e. the answer either is or isn't true. For example:

if (x === 3) { do this thing }

In this case x has to be exactly 3 for the action between the curly brackets to be executed.

if (x !== 3) { do this thing }

means  do this thing only if x isn’t 3.

Write if (x = 3) { do this thing } and you haven't asked a question at all. A single = is an assignment, so the computer quietly re-sticks the value 3 onto x and then does the thing every single time, which is almost certainly not what you meant.
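
A quick sketch of all three, with console.log standing in for "do this thing":

var x = 3;

if (x === 3) { console.log("x really is 3"); }  // true: this line prints

if (x !== 3) { console.log("x isn't 3"); }      // false: nothing prints

if (x = 3) { console.log("oops"); }             // not a question at all: x is
                                                // re-assigned to 3, and 3 counts
                                                // as "truthy", so oops prints
                                                // every single time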

****

If you learn to program with a bright sixth grader, as I did,  you may find that they grasp this floaty = concept much faster than you do.

Sixth graders don’t have to unlearn the = sign because they’ve just started learning algebra. Their brain has just freshly opened to the fact that an equal sign can be used in more interesting ways than previously known.

If your sixth grader is anything like my sixth grader, he or she may very well kick your ass in the first twenty hours of programming, as you stumble again and again over whether that variable is the "same as" or "sort of like" something, and hurt your brain further trying to figure out why it's attached to that meaning in one place in the algorithm, but not in another.

Even when I understood the difference theoretically, my brain kept misreading the sign again and again. It was like the Stroop Test, where someone shows you the word BLUE written in green ink.  When they ask you the color of the ink, you keep saying blue because your brain prioritizes the language definition over the visual.  My brain was clamped onto equal being equal, even when I knew it wasn't.

“But wait!”, you and an unfortunate number of other educators might say.  “If we expose children too early to the more complex and nuanced programming concept of = won’t they get all confused when they learn algebra?  Don’t they need a period of time when the = sign has a more limited scope?”

You may even develop this idea further.  "What if, after being exposed to all this = sign confusion, some children end up learning algebra [cue the soundtrack from Psycho] at a slower rate? What horrible things will this do to their self-esteem?  Maybe they'll give up and refuse to learn algebra altogether, in total frustration!"

This is the argument used by those who think only really, demonstrably super-smart kids should be exposed to programming in middle school.  Ideally in expensive summer coding camps reserved just for them.  And this is probably the argument that will solidify the growing gap between the technologically literate and the now merely language literate for much longer than it should exist.

It’s also the argument that will keep girls from mastering code as a matter of course, since they don’t tend to sign up for summer coding camp as frequently as boys, and by the time the girls are given the option of learning programming, they’ve developed a misconception about computer science as something only of interest to social isolates (var nerdyGeek = “social isolate”).

This is the same reasoning people use when they bring up studies that show  children raised in bilingual environments exhibit a significant language delay.  (Trilingual environments, they argue are even worse!)

I can only argue against this from anecdotal experience. But I will argue against it, passionately.

I’m a Montrealer, so my son, Ben, learned English at home, but went to daycare in our French speaking neighborhood.  To make matters “worse”,  I had joint custody with his father, who was born in Israel and spoke to him in Hebrew.

Indeed, this created a significant language delay, to the point where, when he was two, we had his hearing tested just to be sure.

But there was no hearing problem.  And not only was there no hearing problem; by the time Ben hit kindergarten he was reading fluently in both French and English, counting to 1,000 and already starting to grasp a little multiplication.  Because by then his brain was a language learning machine.

Ben’s not a genius (he’s been WISC tested. Apparently he’s at the high end of average).  He’s just a smart kid whose brain now codes information a little faster than normal kids because he spent his early years in an information rich environment where there was a lot more meaning to sort out.

If your child maintains a coding practice, even if it does cause a little confusion at first,  it’s a good guess he or she will not be falling behind on the math curve for long.  In fact, before you know it they will probably be three times as equal as the other kids.

Or if you want to contemplate a really scary scenario [Psycho refrain] they will probably become three times as equal as you.

Three Ways Learning to Code Would Make Michael Bloomberg A Better Mayor

Earlier this year, this post was included in Should You Learn To Code, a collection of posts put together by Hyperink Press.

Thanks to Jeff Atwood’s provocative column Please Don’t Learn To Code, the debate about whether or not the average person should learn to code rages on.  The Wall Street Journal weighed in yesterday with this Atwood  quote:

To those who argue programming is an essential skill we should be teaching our children, right up there with reading, writing, and arithmetic: can you explain to me how Michael Bloomberg would be better at his day to day job of leading the largest city in the USA if he woke up one morning as a crack Java coder? It is obvious to me how being a skilled reader, a skilled writer, and at least high school level math are fundamental to performing the job of a politician. Or at any job, for that matter. But understanding variables and functions, pointers and recursion? I can’t see it.

I’m not a crack Java coder, or anywhere close.  But even after five months of programming lessons I feel that I can confidently come up with at least three ways that Michael Bloomberg would become a better mayor without even becoming a crack coder.  In fact, he could remain a crap coder, and probably still come out of the experience as a better mayor.

1. He might learn just enough about programming to start considering all the different kinds of operating systems there are now.  Maybe he starts having daydreams about switching to Linux, and starts thinking about all the ways a thriving metropolis like NYC might save money by switching from Windows to Ubuntu.  Probably he doesn't, but he instructs a few minions to at least start researching more open source software that the city could use.  Every once in a while he starts nagging his education department to see how they could improve school budgets and efficiency by using open source where appropriate.

2. He finds himself walking into a meeting and without realizing it, thinking about problems in a totally different way. Instead of spending hours debating all kinds of solutions he asks himself and the people around him: “what is the smallest, most significant, repeatable action we could take right now to solve this problem?”  A few months of coding has nudged his brain in a different direction and before he knows it, he’s cutting through hours of wasted time with more creative and efficient solutions.

3. He's still having those switching-to-Ubuntu fantasies. Oh, he's too old.  But what the hey, he decides to send every child in NYC a Raspberry Pi, the $25, credit-card-sized Linux computer that has just started shipping out of London.  Instead of wasting hours playing video games, some of these kids learn how to make their own damn games. One day a critical mass of those kids grows up to become crack coders and change the world in ways we can hardly imagine.

So there, Jeff Atwood.  You asked; I've explained it to you.

Now can everyone just get back to their Codecademy lessons in peace!

RosiePy: 12-year-old programmer from the U.K.

I firmly believe that 10-12 years old is the best time to start learning how to program.  But I just found out that it's also an awesome time to start teaching programming.  I don't know much about RosiePy yet.  I only discovered her yesterday when she liked my post on parent programmers.  I wouldn't be surprised if I start hearing more about her.

She's a twelve-year-old girl living in the U.K. who started a YouTube channel a couple of weeks ago, teaching kids how to program in Scratch.

Here you go, check it out for yourself.  Ben loved it, and who knows, maybe it'll inspire him to start teaching a little JavaScript sometime soon.

RosiePy’s blog.

The Parent Developer

I’ve actually been coding for a long time, without realizing it.

If we remove all the syntax of computer language and look at what the bare bones of coding is, it’s just using logic, reason and simple commands to create repeatable behaviours.

This is what parents do with children.

They start with small instructions,  baby steps and repeated routines,  appropriate to both the child’s abilities and the parent’s still developing skills as a programmer of babies.  Then as the child  starts to develop cognitive abilities, the parent sets up a system of conditionals: acceptable choices the child can make that will not include choices that will  bugger up their lives.

Figuring this out is a frustrating challenge, but it will probably work well enough while the child is still not much more than a new Object in the parent’s mind, something that in theory should inherit  all her workable (and perhaps not as workable as she’d like) methods.

But at a certain point the child hits  the age where he now has the abstraction abilities and the independence  to start programming his own life.  And this is where the real problems start, because the parent is  no longer the programmer with a child Object.  The parent is now dealing with a junior developer.  And if the  parent does not know how to establish her position as senior developer, there will be blood.

That's why I think this is such a great time for Ben and me to learn how to code.  Because even if no one in the family ever becomes a professional programmer, we're still regularly working together on solving problems with commands and the kind of simplification skills that inevitably spill into our lives.  Ideally this will help us solve problems in ways that are more neutral and productive than what usually happens between adults and teenagers.

Obviously Ben will not stay a junior developer in this family for long.  This is the law of life and technology. Coders move on. But for now it’s still my responsibility to instill good thinking, writing, and commanding habits.

It’s all about those transferable skills.

Understanding The Password

Last week my eleven-year-old son's Yahoo mail account was hacked.  Ben's on Facebook and doesn't e-mail very much, so fortunately his list of contacts was small.  Because he doesn't have any bank accounts or credit cards, I guess I'd procrastinated explaining to him the importance of having a reasonably complex password that you change often.

If you're a parent who hasn't done this yet, do it now. Believe me, you do not ever want to see the look on an eleven-year-old boy's face when he discovers his identity attached to thousands of e-mails offering to introduce people around the world to his "beautiful lady friends."

One thing I discovered this week, however, is that I am a significantly different parent after three and a half months of coding lessons.

If this had happened last year, I probably would have shared my son’s sudden picture of the world as a cryptic, chaotic place filled with evil geniuses programming bots that inexplicably burrow their way into your private, vulnerable data.  I would have soldered up the parental controls, because that’s all I would have known to do, and I probably would have done everything I could to protect my son’s innocence and to continue doing it for as long as possible.

But because of our family Code Year pledge, I decided, instead, this would be a good week to review what we’d learned about randomization programs. From what we already knew, it was easy to see how someone with just elementary programming skills could spout out enough random letters or numbers in under an hour to crack the accounts of people who still believed that they could easily protect their data with a simple memorable word or a birthdate.

It was also easy to imagine how some of these simple hacking bots were being created by teenagers in poorer countries, taking advantage of the fact that they're learning core programming skills that are not currently taught as part of a standard high school education in richer countries.

I'm not talking about skills fundamental to a computer science degree.  I'm talking about skills so basic they can be learned by a middle-aged mother and her eleven-year-old.

I didn’t have to go past week three, conditionals, to find a simple randomization project we could adapt to create our own superSecurePasswordCreatorBot.

We used the dice-throwing project. This elementary program throws a virtual set of dice to create a random score.  With just a few more lines of code, we could replace the numbers on the sides of the dice with six meaningful words and six meaningful double digits.  Call the program and we have a randomized but memorable password that can be changed weekly. Add a third die, and we have a SuperDooperSecurePasswordCreator.  As Ben's knowledge of Object Oriented Programming develops we can continue working on this so that we can easily and automatically store our passwords for easy retrieval.
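
Here's a minimal sketch of what our adaptation looks like in JavaScript. The words and double digits below are just placeholders; obviously you'd pick (and keep secret) your own:

// two dice: one with six meaningful words, one with six meaningful double digits
var wordDie  = ["poodle", "train", "moth", "loom", "peg", "jar"];
var digitDie = ["42", "17", "88", "63", "29", "75"];

// throw one die: pick one of its six faces at random
function rollDie(die) {
  return die[Math.floor(Math.random() * die.length)];
}

function superSecurePasswordCreatorBot() {
  return rollDie(wordDie) + rollDie(digitDie);
}

// add a third die for the SuperDooperSecurePasswordCreator
var extraDie = ["scratch", "cobol", "eniac", "ada", "hanoi", "anno"];

function superDooperSecurePasswordCreator() {
  return superSecurePasswordCreatorBot() + rollDie(extraDie);
}

console.log(superDooperSecurePasswordCreator()); // e.g. "moth63ada"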

The important thing at this moment, however, is not so much creating an impenetrable password in a family war against an army of alien grifter bots.  What’s important is that we’re engaged in a productive learning curve, not huddling together into some increasingly tiny information gated community.

Now I’m wondering if in protecting my son’s “innocence,” what I really would have been protecting was his ignorance, and mine.

We've grown up in a society that thinks that teaching people how to use software is digital literacy. But in the three months since I started learning core programming skills, I've been hit with the full-on obvious truth that this is pretty much the same as teaching people how to read without teaching them how to write.

Computers monitor and manage every aspect of our civilization.  We would never make math something that people only start learning in university, and only if they show an interest.  Why are we doing this with computer science?

How is this different from the days when monks wrote, priests read from illuminated texts, and the masses listened, enthralled?

Our progress was never dependent on how well we memorized or used the illuminated text. It was always dependent on how well we understood that text and used it to illuminate the world around us.

In the same way, our progress as a civilization is not dependent on how well we use, or even how well we make software. It’s dependent on how well we understand how it is made and how this computational thinking helps us understand the world we live in.

That’s the password to the next level.


 


Humanizing Factorials

Anno's Mysterious Multiplying Jar (book cover)

With The Tower of Hanoi, I had fun with the evil powers of recursion. But I'm not actually learning code to teach my son to become a dictator, even a benevolent one bearing brownies.  While we're learning recursion, it's probably not such a bad idea to bring up some of the consequences of creating formulas that make work and data collection efficient, but potentially dehumanizing.

A few years back a friend gave Ben a lovely book that shows both ends of the spectrum of rich creativity and mechanistic abstraction. Anno's Mysterious Multiplying Jar was written and illustrated in 1983 by the Japanese father-and-son team of Mitsumasa and Masaichiro Anno.  It tells a simple story of factorial growth that starts with a jar large enough to contain an ocean.

In this ocean is an island and on this island are two countries:

In each country are three mountains.  On each mountain, four walled kingdoms.  In each kingdom, five villages.  In each village, six houses.  In each house, seven rooms.  In each room, eight cupboards.  In each cupboard, nine boxes.  In each box, ten jars like the first.

The question at the end of the story is: how many jars are contained inside the first jar?  The answer is, of course, 3,628,800, a.k.a. 10 factorial, written 10!.
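
And that's a number we can now check for ourselves. Here's a tiny recursive sketch in JavaScript (the function name and the printed checks are mine):

// 10! = 10 × 9 × 8 × … × 1: each level multiplies everything the levels inside it hold
function factorial(n) {
  if (n <= 1) { return 1; }     // base case: the innermost jars
  return n * factorial(n - 1);  // recursive case: go one level further in
}

console.log(factorial(10)); // 3628800 jars in the jar
console.log(factorial(8));  // 40320, the dots coming up below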

The first part of the story is filled with richly illustrated pictures of villages, houses, rooms and cupboards, all with their unique, individual characteristics.  The second part retells the story with dots instead.  It goes as far as a two-page spread representing 8!, or 40,320 dots.  The Annos don't venture past that, since they're writing a children's book, not a heavy tome full of dots.

But the point, so to speak, is made.