Me and startup companies

It should be no surprise to readers that I am interested in startups and entrepreneurship. The only people who read this blog either know me or are related to me.

I left a big company ‘X’ to join a smaller company. They called themselves a startup, but not really. The current gig is well funded, if not quite around the corner from profitability. At over 150 people, it does not feel much like a startup.

I have met with friends over lunches and breakfast for years to discuss ideas, new companies in the news, and the entire philosophy of startups. Fun stuff to talk about. We all have a dream (I have watched “Tangled” about 15 times with the kids).

My friend Frank said you have to be a special sort of crazy to found a company. You have to be smart enough to come up with a good idea that can be implemented, yet at the same time dumb enough to think you can actually do it all.

Another friend and I applied to the TechStars program in 2009. David Cohen was kind enough to send us a few emails. We did not have enough traction to get in on just our idea. Still, it was a good idea. I was excited to hear about Chris Piekarski in the TechStars NYC class. I can say I knew him when he was green and just out of school. With a day job and kids at home, TechStars is not the best fit for me.

I have not written about any of it here, because this is for work. I write about technical stuff, mostly to have some proof that I can type words. That is important in a tight job market. Remember, always keep an eye open for the next big thing. You never know when they will move your cheese. So this is the first post under the heading of entrepreneur.

What changed? There is another summer program called the Founder Institute. I made a short video and sent in my application. They asked me to take their test. It was a standard personality test and a pattern matching game. They let me in. When the universe gives you an opportunity, you have to take it, right? What does this mean to the day job, my free time, and my family? At this point we don’t know.

As of now, I am not sure how much blogging I can do about the course work and other activities. The course does not start until May. Right now I am working on the idea, trying to test and refine things. If I get more than two comments I will post my application video. It is embedded software and sort of multicore related.

If you are an embedded developer and willing to help, let me know in the comments. I need to talk to engineers and pick their brains.

Asynchronous Communication

I just rediscovered this post from O'Reilly Radar about asynchronous communications: http://radar.oreilly.com/2010/10/dancing-out-of-time-thoughts-o.html Take a moment to read it; I will wait.

It brought to mind another recent article about how multicore programmers must think differently. The author argues that "different" thinking is needed, not that multicore is harder than regular programming.

What relates these two things? Asynchronous communication is multicore.

People attracted to programming are, by nature and by training, linear thinkers. Step one leads to step two and step three, and so on to the end. Sequential programs are drummed into our heads through all our college years.

Unfortunately, I am not a linear thinker. I am a leaper. Step one leads to B, which jumps to crocodiles, and that means the answer is 7, done. Being a leaper in a linear world is a double curse. It is hard to communicate with the linear thinkers, because either I have skipped three steps and already arrived at my answer, or I do not follow their steps, so I cannot see the big picture without more detail. Plus, as a programmer, talking to the non-technical is always a bit iffy.

Understanding that the world may not happen step by step is the first step to understanding multicore. Process A may finish before B, sometimes, but not always. The messages may get swapped in transit. We can put things back in order, or find a way that order does not matter.
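To make "put things back in order" concrete, here is a minimal sketch in C. Everything here is invented for illustration: the sender stamps each message with a sequence number, and the receiver holds early arrivals in a small window until the missing message shows up.

```c
#include <stdint.h>

/* Hypothetical message stamped with a sequence number by the sender. */
typedef struct {
    uint16_t seq;
    uint8_t  payload;
} msg_t;

#define WINDOW 8

/* Reordering state: deliver only when the expected sequence number
 * arrives, holding later messages in a small slot array. */
typedef struct {
    uint16_t next_seq;       /* next sequence number to deliver    */
    int      full[WINDOW];   /* slot occupied?                     */
    msg_t    slot[WINDOW];   /* held messages, indexed seq % WINDOW */
} reorder_t;

/* Push one (possibly out-of-order) message; returns how many messages
 * were delivered to out[], in order. */
int reorder_push(reorder_t *r, msg_t m, msg_t *out)
{
    int n = 0;
    r->slot[m.seq % WINDOW] = m;
    r->full[m.seq % WINDOW] = 1;
    while (r->full[r->next_seq % WINDOW] &&
           r->slot[r->next_seq % WINDOW].seq == r->next_seq) {
        out[n++] = r->slot[r->next_seq % WINDOW];
        r->full[r->next_seq % WINDOW] = 0;
        r->next_seq++;
    }
    return n;
}
```

Pushing message 1 before message 0 delivers nothing; pushing message 0 then delivers both, in order. The other option the paragraph mentions, designing so order does not matter, is often the better fix when you can get it.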

It is an asynchronous world, and linear thinkers better get used to it.

The myth of the small processor

It is great to be an embedded developer. We are drowning in riches. The job market is good despite the economy (knock wood), and the world we live in is rich with new parts and fun things to do with them.

My recent work is a departure from the multicore and bigger processors of my past. It has been over a decade since I last worked on an 8051 or 6805. (If you don't know what those numbers mean, don't worry about it.) Recently I have been working with the smaller Atmel AVR microcontrollers.

Once again the great myth of small processors has run over me like a truck. Many coders believe the myth. And because they believe it, it continues, sort of like an evil Moore's Law.

The myth: a good design won't fit on a small processor. It is wrong, and it needs to be busted.

Small processors have less memory, less code space, and more limitations. That is not an excuse. Actually, it is an excuse: for poor coding and bad design.

So, what gets broken and how do you fix it?

First, fix the state. All programs have state. The person writing the code may think there is no state, so the state gets hidden. It is much better to put it out in the open where it can be seen. If there is a flag that gets set, that is state. If someplace the code says do this before you do that, but don't do something else before the first two, that is state. In a small system these things become globals and flags. That is wrong. Don't add a flag, build a state machine.

There can be only one. That is good for states too. Flags are considered harmful. Bitwise flags in a global are considered evil! A set of bits in a global variable does not represent a few states. Every possible combination of the flags is a state, so eight flag bits give you 256 of them. Are you ready to test every one?

I recently looked at some code that used flags. All through the code were statements like "if this flag, do x". For a tiny microcontroller this causes code bloat and uses up the available memory fast. It is much better to build a simple switch state machine with one variable. Don't set a flag, add a state. Too many states? Not really.

Say the code requires the security message before the 25 other messages. OK, so we add a SECURITY_MSG_RX flag that gets set when it is received and validated. Now every other message handler has to check if (SECURITY_MSG_RX == (flags & SECURITY_MSG_RX)) (yes, I do write my bitwise tests like that). No, you say, put the check in the receive handler, not every message handler. Except the security message itself has to get past the check. One exception, not a problem. Now add two more flags, rinse and repeat. You quickly have a mess.

The better solution is to make a state variable. Call it recvState for this paragraph. Now, set the state to RECV_STATE_WAIT_FOR_SECURITY. That one state checks for the security message and dumps anything else. Don’t move to the next state until secure. All states after can assume the security is valid, no need to keep rechecking.
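A minimal sketch of that state machine in C. The message IDs and the RECV_STATE_RUNNING state are invented for illustration; a real protocol would have more states after the security gate.

```c
/* Hypothetical message IDs, invented for this example. */
enum { MSG_SECURITY = 0x01, MSG_DATA = 0x02, MSG_STATUS = 0x03 };

typedef enum {
    RECV_STATE_WAIT_FOR_SECURITY,
    RECV_STATE_RUNNING
} recv_state_t;

static recv_state_t recvState = RECV_STATE_WAIT_FOR_SECURITY;

/* Returns 1 if the message was accepted, 0 if it was dumped. */
int handle_message(unsigned char msg_id)
{
    switch (recvState) {
    case RECV_STATE_WAIT_FOR_SECURITY:
        /* Only the security message gets through; dump everything else. */
        if (msg_id == MSG_SECURITY) {
            recvState = RECV_STATE_RUNNING;
            return 1;
        }
        return 0;

    case RECV_STATE_RUNNING:
        /* Every handler past this point can assume security is valid.
         * No flag checks, no exceptions. */
        return 1;
    }
    return 0;
}
```

One variable, one switch, and the "is security done yet?" question is answered in exactly one place.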

A nice way to handle the state changes is to add a setNewState(stateType) call. You did make your states a typedef, right? Grep for all calls to the module-private (static) setNewState() and every single state transition is visible. Add a last-state variable and a log message that reads "Changing state from STATE_ONE to STATE_TWO" and life is good.
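That might look something like this sketch; the state names and the stateName() helper are made up for the example, and on a real small target you would swap printf for whatever logging the system has.

```c
#include <stdio.h>

typedef enum { STATE_ONE, STATE_TWO, STATE_THREE } state_t;

static state_t currentState = STATE_ONE;
static state_t lastState    = STATE_ONE;

/* Hypothetical helper to turn a state into a printable name. */
static const char *stateName(state_t s)
{
    static const char *names[] = { "STATE_ONE", "STATE_TWO", "STATE_THREE" };
    return names[s];
}

/* Module-private: every transition funnels through here, so a grep for
 * setNewState shows every place the state can change. */
static void setNewState(state_t newState)
{
    lastState    = currentState;
    currentState = newState;
    printf("Changing state from %s to %s\n",
           stateName(lastState), stateName(currentState));
}
```

The log line alone pays for itself the first time a unit misbehaves in the field.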

Another piece of code I could not understand had a buf[BUFF_LEN * 2]. Then all the code checked against BUFF_LEN. Why? If I had to guess, one of the checks was wrong, or misplaced, so sometimes the buffer overflowed. Doubling the size is a bad fix for bad code; it hides the problem. Please make the code correct, and then there is no reason to waste the space.

One last strategy saves even more space, but I almost never see it used: write programs that are driven by tables. This is not my idea. It comes from Code Complete by Steve McConnell.

Every place possible, build a table. Write code to walk the table, then handle only the parts that are different. This is the object-oriented concept of programming only the difference. Once the table handling and the common-case code are working and tested, adding more lines to the table is easy. For an embedded system, declare the tables const, static const if possible. The const keyword should (compilers vary, so take care) keep the table in program space, not in RAM. Don't put variables in the table. The resulting code will be much smaller, easier to test, and expandable.
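A minimal sketch of a table-driven message dispatcher; the message IDs and handler names are invented. Note that on AVR specifically, const alone may not be enough to keep the table out of RAM; PROGMEM and the pgm_read macros may be needed, so check your toolchain.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical handlers, invented for this example. */
static int handlePing(uint8_t len)  { (void)len; return 1; }
static int handleReset(uint8_t len) { (void)len; return 2; }

typedef struct {
    uint8_t msg_id;
    int (*handler)(uint8_t len);
} msg_entry_t;

/* static const should land in program space on most compilers. */
static const msg_entry_t msgTable[] = {
    { 0x01, handlePing  },
    { 0x02, handleReset },
};

/* Walk the table; only the table grows when new messages are added. */
int dispatch(uint8_t msg_id, uint8_t len)
{
    for (size_t i = 0; i < sizeof(msgTable) / sizeof(msgTable[0]); i++)
        if (msgTable[i].msg_id == msg_id)
            return msgTable[i].handler(len);
    return -1; /* unknown message */
}
```

Adding message number 26 is one new handler and one new table line; the dispatch code never changes.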

This works great for networking. Tables of messages are easy to implement and save code space over a switch on the message ID. Extra points to anyone who comments with the way to find the table size as a constant at build time, in C.

If there are worries about traversing the list, put the list in order and use a simple binary search. For 10 items it is not worth the effort, but every little bit helps.
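For completeness, a binary search over a sorted ID table is only a few lines of C. The IDs here are invented; the only requirement is that the table stays sorted.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical message IDs; the table must be sorted for this to work. */
static const uint8_t sortedIds[] = { 0x01, 0x04, 0x10, 0x22, 0x40 };

/* Returns the index of id in sortedIds, or -1 if not present. */
int find_id(uint8_t id)
{
    size_t lo = 0;
    size_t hi = sizeof(sortedIds) / sizeof(sortedIds[0]);
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (sortedIds[mid] == id)
            return (int)mid;
        if (sortedIds[mid] < id)
            lo = mid + 1;   /* target is in the upper half */
        else
            hi = mid;       /* target is in the lower half */
    }
    return -1;
}
```

For a five-entry table this is overkill, as the paragraph says, but the code costs almost nothing and scales when the table grows.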

Here are just a couple of tips that can make life with a small system happy and worthwhile. Please comment if you have others.

100 Different cores – The minimal RISC machine and multiple cores

Frank made a comment on my last post. In my usual, totally tangential way, it reminded me of the minimal RISC (Reduced Instruction Set Computer) machine.

http://en.wikipedia.org/wiki/One_instruction_set_computer
http://www.cs.uiowa.edu/~jones/arch/risc/

When I was a boy, taking Computer Architecture from Dr. Chang many years ago, the RISC vs. CISC debate was real. The paper above describes the minimal RISC machine.

The idea is that the computer has one instruction. That instruction is: move memory A to memory B, done. The computer can do anything with that one instruction. Adding, multiplying, everything is in the memory map. To add 2 + 3 the computer moves 2 into adder slot A, then moves 3 into adder slot B, then gets the result from adder slot C. Easy.

The idea of 1000 of these super simple cores is fun to play with. Next to the cores would be banks of adders, multipliers, dividers, or whatever is needed.

This makes some sense from a resource point of view. Computers tend to add, subtract, and compare more than they divide. Things that take a long time in an algorithm, like multiplication, could run in parallel. Start multiplier A, then B, then C, get result A, then B, then do an add and fetch C. Cool.

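
The 2 + 3 example is small enough to simulate. This toy sketch (addresses and names all invented) has one operation, move, and a memory-mapped adder that updates its result slot whenever either input slot is written:

```c
#include <stdint.h>

/* Toy "one instruction" machine. The only operation is move memory A
 * to memory B. Addresses 100-102 are a memory-mapped adder: write the
 * inputs into slots A and B, read the sum from slot C. */
#define ADD_A 100
#define ADD_B 101
#define ADD_C 102
#define MEMSIZE 128

static int32_t mem[MEMSIZE];

/* The single instruction. Writing an adder input slot makes the
 * "peripheral" refresh its output slot. */
static void move(int dst, int src)
{
    mem[dst] = mem[src];
    if (dst == ADD_A || dst == ADD_B)
        mem[ADD_C] = mem[ADD_A] + mem[ADD_B];
}

/* 2 + 3 using nothing but moves. */
int32_t add_demo(void)
{
    mem[0] = 2;
    mem[1] = 3;
    move(ADD_A, 0);   /* move 2 into adder slot A */
    move(ADD_B, 1);   /* move 3 into adder slot B */
    move(2, ADD_C);   /* fetch the result from slot C */
    return mem[2];
}
```

Multipliers, dividers, and comparators would just be more address ranges, which is what makes banks of them next to many tiny cores such a fun idea to noodle on.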
Just something to noodle on.