Programming Your Own Jibo


This is Andy Atkins, VP of Engineering here at Jibo.

First off, I want to send out a big “Thank You” to all of you for the overwhelming show of support you have given Jibo since we launched our crowdfunding campaign. Your contributions, questions, and the “buzz” you’ve helped create confirm that we’re on to something here, and we want to be as transparent as we can about what we’re doing and where we are.

As I’ve been wading through the email that has come in since we launched the campaign, it is clear that many of you are dying to learn more about Jibo: its applications and capabilities, privacy and security, as well as how one might develop additional “skills” for Jibo. In response, we’ll continue to update our FAQs to address as many of your shared questions as we can.

Over the coming weeks and months, I’ll also invite other members of my development team to jump on this blog and dive more deeply into specific aspects of our technology. We might cover topics like how we’re using Jibo’s hardware for sensory input, our use of behavior trees to monitor that input and drive Jibo’s expression system, our plans for the Jibo ecosystem, news regarding Jibo’s speech recognition system, and so on. And we plan to share taped demos and updates of work in progress, to give you a better sense of where we’re at. Let us know what technical topics you’d like us to cover, and we’ll try to accommodate.

There are many ways to stay connected with us. Beyond following us on Twitter and Facebook, I encourage you to keep an eye on this blog (and, of course, welcome feedback via the Comments section). And once we get closer to releasing a preliminary version of our SDK (still many months off), we’ll open the Developer Forums to facilitate more two-way communication.

Finally, you might have already noticed that we just added new stretch goals to our campaign. Of particular interest to me are the four Hackathons we hope to host in October 2015 here in the Bay Area as well as in Boston, where a small group of you will have the opportunity to work with members of my team to produce new “skills” for Jibo. We’ll award prizes for the best hacks, and along the way, get you some press exposure. These should be fun, intense coding sessions, and we’ll have more details as we get closer to the events.

For today’s blog, I’d like to cover our goals and plans for the JiboAlive Toolkit and SDK.

I’ve always enjoyed making things. As a kid, I loved my Legos, constructing hugely elaborate buildings with doors, windows, roof tiles, and the like. In college, I was fascinated with hardware and built my own alarm clock that featured a relay that could switch the power to my stereo on and off.  (Have you ever created your own printed circuits by dipping copper-clad boards in acid baths? Very cool!)

But as I indulged my curiosity, I quickly discovered that building things was not enough. I wanted to empower others to create cool things that leveraged my work. Early in my career, I led the team at Apple that created the networking APIs (application programming interfaces) on Newton that enabled an outside developer to build one of the world’s first mobile web browsers. A few years later, my team at Danger created a highly advanced SDK (Software Development Kit) that, at the time, supported Java application development, cloud computing, and an application store for T-Mobile’s Sidekick mobile phone, predating the introduction of the iPhone.

So the common theme of my career is not just building new products, but building software, hardware (and now, cloud) platforms upon which others can add value. This is why I’m so excited about what we’re doing at Jibo. Were we “just” developing a product around Cynthia’s vision of a social robot, that would be quite revolutionary in its own right. But we’re doing more: creating a platform and ecosystem that’ll empower developers (and aspiring developers) to build applications on top of what we’re creating.

Early on, we defined two (seemingly contradictory) goals for Jibo’s developer tools:

  1. Make programming new behaviors on Jibo as easy and accessible as possible, and
  2. Support professional tools for professional developers.

To make Jibo programming as accessible as possible, we looked at what’s out on the market today. Programming for iOS or Android is certainly easier than writing embedded C/C++ code — but there is still a learning curve associated with writing Java and Objective-C, and we wanted to make programming for Jibo even easier.

Over the past several years, we’ve noticed that scripting languages like JavaScript were maturing nicely, had a much lower learning curve, and environments like Node.js really made JavaScript viable outside of a browser. Coupled with that, scripting language interpreters (like Google’s V8 and WebKit’s JavaScriptCore) were getting faster and faster on embedded devices. So much so that my last team at Netflix had adopted JavaScriptCore as the engine that now drives almost all of the Netflix applications on smart TVs and gaming consoles.

So we’ll be doing something similar with Jibo — using JavaScript as our application programming language, built on top of the superb environment supported by Node.js, with V8 as the underlying scripting engine.

But this still isn’t easy enough for someone just getting into programming. Last year, when my daughter was learning how to program in school, her teacher used Scratch to introduce basic programming concepts. I was particularly impressed with Scratch’s graphical interface and the fact that it was all web-based. No intimidating text files, command lines, IDEs, or Makefiles; just drag and drop with a built-in simulator.

While we won’t be using Scratch for Jibo, we are borrowing the concept of a web-based graphical interface for adding behaviors to Jibo. This tool will help developers create behavior trees that define the sensory input to watch (e.g. a spoken phrase, a specific touch to Jibo’s face or body, or visual data) and drive the resulting response by Jibo (e.g. an animated movement, sound, visual, or a spoken phrase). Anyone with a modern web browser should be able to use our web tools to add new behaviors to Jibo. And we’re planning to create a web-based simulator as well, so you can try out newly constructed behavior trees before sending them to your Jibo robot.
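To make the behavior-tree idea concrete, here is a minimal sketch of the pattern in plain JavaScript. The node types and the `greetOnHello` example are hypothetical illustrations of the general technique, not the format our web tools will actually use.

```javascript
// Minimal behavior-tree sketch (illustrative only, not the Jibo toolkit):
// each node is a function over a context, returning 'success' or 'failure';
// a sequence runs its children in order and fails fast.
const SUCCESS = 'success', FAILURE = 'failure';

const condition = (predicate) => (ctx) =>
  predicate(ctx) ? SUCCESS : FAILURE;

const action = (fn) => (ctx) => { fn(ctx); return SUCCESS; };

const sequence = (...children) => (ctx) => {
  for (const child of children) {
    if (child(ctx) === FAILURE) return FAILURE;
  }
  return SUCCESS;
};

// "When a face is seen and the phrase is a greeting, play the wave animation."
const greetOnHello = sequence(
  condition((ctx) => ctx.faceDetected),
  condition((ctx) => ctx.phrase === 'hello'),
  action((ctx) => ctx.output.push('play:wave'))
);

const ctx = { faceDetected: true, phrase: 'hello', output: [] };
greetOnHello(ctx); // ctx.output is now ['play:wave']
```

The graphical tool's job is essentially to let you assemble trees like `greetOnHello` by dragging nodes instead of writing the code by hand.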

Coupled with these web-based tools, we plan to release a library of expressions (animations, face graphics, and spoken phrases), so that you can quickly plug together components to add custom behaviors to Jibo. These web-based tools and expression libraries will comprise our JiboAlive Toolkit.

But we know that this is not enough for some of you, and so we also plan to release our full set of JavaScript APIs (as well as a plug-in for the Eclipse IDE for those who want this), so that serious developers can create full applications using our interfaces. This will become our JiboAlive SDK.

I hope this begins to give you an idea of what we’re trying to do here to support you, our developers and hobbyists. We don’t want Jibo to be a “toy” that you use for a couple of hours, then throw into your closet; we want Jibo to become a healthy platform and vibrant ecosystem that supports both casual developers who just want to “play around” with customizing Jibo and professional developers who might want to add real value to the Jibo ecosystem (and, of course, sell their applications through our appstore).

As we get further along developing the JiboAlive Toolkit and SDK, we’ll come back to this blog with status, demos, and more details. So hang on tight — it’s going to be a wild and exciting ride!

ANDY ATKINS is an experienced technology executive, formerly with Netflix, Danger/Microsoft, and Apple, having delivered new hardware-software platform experiences and ecosystems in consumer technology.

60 thoughts on “Programming Your Own Jibo”

  1. This is very exciting! Would you be willing to entertain the idea of supporting Flash as a character animation tool? It’s capable of outputting HTML and JavaScript representations of Timeline based animations with FrameLabels. Imagine telling your character to “goToFrameLabel(‘wave’)” to get your Jibo’s avatar to emote in a way most amateur animators can wrap their heads around.

    • Hi, We use JavaScript. Like Flash’s ActionScript, this is an ECMAScript-derived language, so from a coding point of view the languages are similar. But of course you are interested in the graphical aspect of Flash. It would not be practical to run the Flash player on the robot for a number of reasons, not the least of which is that the robot does not run an operating system supported by Adobe. I hope this answers your question.

  2. I am so psyched about being a part of the beginning of the Jibo community and its growing developer base. :). I think that being able to have a few cool custom interactions when you first get Jibo to know you would be pretty sweet. First question: you said JiboAlive is web-based? Will this be available in mobile form, and secondly, when will we be able to beta? Can’t wait to get to know the team better o) !

    • @Dan, The JiboAlive Toolkit (as opposed to the SDK) will require a “modern” HTML5 browser that also supports WebGL. Latest versions of Chrome, Firefox, Safari, and IE “should” work. We’ll look at supporting mobile browsers as well, but honestly, it all comes down to testing. We may have to phase in the support. Regarding beta, that’s a number of months off. We’re probably looking at next Spring or early Summer at the earliest. We’ve got a lot of work to do before we get there, but we’ll definitely get it out there before calling it “final”.

      • Haha, when it comes to development “final” is the pastry on a string in front of you carried away by all the bugs bugs bugs and functionality, huh. I’m really looking forward to the forums opening up and helping the community grow and thrive. If you need anything tested or there’s anything I can help with regarding the campaign/development/anything Jibo related, I’m just an email away. Full support and good vibes to the whole team. Jibo on! •)

  3. Will I be able to send data from JIBO to other devices, such as commands to other computers, smartphones, Raspberry Pis, or microcontrollers such as Arduino, to control electrical tools? E.g., if I want to build legs for JIBO so he can move around, I might need to send commands to the computer controlling the legs, or to the motors.

    • It could be very interesting to have Jibo interact with custom hardware! I’m also interested in this possibility, and from what I’ve read so far, it seems that Bluetooth will be the way to go. Hope someone from the staff can tell us more about this!

    • @Andreas, we are looking into the possibility of adding an accessory port (perhaps serial or USB) to the back of Jibo’s neck. Still undecided. But remember that Jibo will have a WiFi/BT card in it, so if your peripheral talks TCP, it can communicate with Jibo. Bluetooth software support will probably come on line later in 2016.

    • Okay, a very late reply, but: Great idea! I can imagine Jibo looking like a small centaur, attached to one end of a four-limbed chassis (the battery pack providing the opposing ballast)… or any number of other body configurations or “mobile platforms” as the situation/user requires.
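For those asking about driving peripherals: since the staff reply above points at TCP over WiFi as the likely path, here is one common way to frame commands over a socket, newline-delimited JSON. This is purely an illustration of the general pattern (the `drive` command and its fields are invented), not an official Jibo interface.

```javascript
// Hypothetical newline-delimited JSON command protocol for talking to a
// peripheral (e.g. an Arduino-driven chassis) over TCP. Not an official
// Jibo interface; just a common framing pattern you could layer on a socket.
function encodeCommand(name, params = {}) {
  return JSON.stringify({ cmd: name, params }) + '\n';
}

// Accumulates raw socket chunks and yields complete commands, since TCP
// may split a message across arbitrary chunk boundaries.
function makeDecoder() {
  let buffer = '';
  return function decode(chunk) {
    buffer += chunk;
    const messages = [];
    let i;
    while ((i = buffer.indexOf('\n')) >= 0) {
      messages.push(JSON.parse(buffer.slice(0, i)));
      buffer = buffer.slice(i + 1);
    }
    return messages;
  };
}

const decode = makeDecoder();
const wire = encodeCommand('drive', { left: 0.5, right: 0.5 });
// Feed the message in two chunks, as a real socket might deliver it:
const partial = decode(wire.slice(0, 10));  // [] -- no complete message yet
const complete = decode(wire.slice(10));    // one complete command
```

In a real setup the decoder would be wired to the `'data'` event of a Node.js `net.Socket`; the framing logic stays the same.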

  4. My comment disappeared, but is it possible to send out data from JIBO? To other apps on smartphones and computers, and send commands to microprocessors? I was planning to mount him on a hexapod using an Arduino or Raspberry Pi, or something. Love

  5. Thank you for writing this post! It’s really interesting to see some first impressions of what’s to come with Jibo. Like you, I’m interested in the ways we can make technology that can be “friendly” to everyone: Easy to understand, use and even improve.

    Right now, I’ve had experience in C/C++ and ASM for embedded systems, so it’s quite a surprise to me to see this robot will be programmed in Java. It’s nice, something new to learn, so I’ll start learning today to be ready!

    I think that InteractiveNYC had an interesting take on how can we program character animations into Jibo, but I think that, for a robot like him, an interface with a series of “sliders”, each one controlling a different motor would be great. We could play with the sliders while a 3D model reflects how each slider would change the robot’s “pose”, and when we are happy with it, save it as a keyframe. Then repeat the process until we have a complete animation!

    I have a question: Will there be support for offline tools? Cloud support and web-based applications are great, but I’m afraid that I would lose an important feature if Jibo gets into a place with no internet connections available.

    • Hi, I’m sure Andy will reply more. But I just wanted to point out that the robot uses JavaScript, which is different from Java. The robot has C++ code for some of the lower level functionality. Like many modern devices we use node.js for creating the application layer code. I’ve been writing code that runs on the robot for the last several months and I can assure you it’s quite fun and very powerful.

      • Hey, Rich! Thanks for the explanation; it helps me dispel the notion I had. During college, all my professors and most students had an “anti-Java” (or anything that sounds Java-ish) stance, arguing that it wasn’t optimized, or powerful enough, or flexible enough for embedded applications… So I had never experimented with Java or JavaScript before; but it looks like the time is now!

    • @Vexeluis, As Rich said, we’ve adopted JavaScript rather than Java for our interface language (hopefully making it even more accessible). Re: offline: you could always program in JavaScript (and even use an IDE like Eclipse), obviating the need for some of the web-based tools. Moreover, Jibo itself may have some subset of functionality in offline mode. But I have to confess, offline mode is not a high priority for us. We’re big believers in the “internet of things”. While your iOS or Android mobile phone can operate offline, it becomes much less interesting and capable. I’m sure the same will be true for Jibo.

      • I agree. However, it would be nice to persist some data locally especially for critical notifications such as medication reminders (date-time/medication/dose) to ensure connectivity issues don’t torpedo functionality. Need to look at Jibo’s specs again but I assume that this will be possible.

      • @Dennis, One of the things Andy mentions in his blog is that we use JavaScript in the context of node.js. Andy mentions this so that people can familiarize themselves with how JavaScript is executed within node.js. It is different from how JavaScript is used within the context of a web browser. I would also encourage you to take note of the expression “the truth is in the cloud.” You mention a torpedo in your comment. Turn that torpedo around and ask what would happen if a given Jibo unit were hit by a torpedo. As product designers we have to consider both situations!

    • I think it would be nice if you could make animations in CAD software (e.g. Blender) and use them for JIBO via a plug-in or something.
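The slider-and-keyframe workflow suggested in this thread ultimately comes down to interpolating between saved poses. Here is a sketch of that core step in plain JavaScript; the joint names are invented and do not reflect Jibo’s actual motor layout.

```javascript
// Linear interpolation between two saved "poses" (maps of joint name ->
// angle in degrees). Joint names are made up for illustration; this is the
// core math behind a keyframe animation editor, not a Jibo API.
function lerpPose(a, b, t) {
  const pose = {};
  for (const joint of Object.keys(a)) {
    pose[joint] = a[joint] + (b[joint] - a[joint]) * t;
  }
  return pose;
}

const restPose = { base: 0, body: 0, head: 0 };
const lookLeft = { base: 45, body: 10, head: -5 };

// Halfway between the two keyframes:
const midway = lerpPose(restPose, lookLeft, 0.5);
// midway is { base: 22.5, body: 5, head: -2.5 }
```

An animation tool (whether sliders in a browser or an export from software like Blender) would produce a list of such keyframes and sample `lerpPose` at the playback frame rate, perhaps with easing curves instead of straight lines.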

  6. I’m liking what I am hearing, but I too came from college with a wealth of programming knowledge but very little in anything Java. Over the years since then, most of my day-to-day work has been Java-based. What I’m interested in knowing is: will the first model you’re coming out with be tethered in some way, or stand-alone based on its WiFi? And how large will its internal memory be, to retain all of its learning script? Um, I guess I should have asked that first….
    Does the Jibo have the ability to learn new things on the fly, like people within the home, or will I have to input several images of each person in the home, naming each profile and making voice logs so the Jibo knows who is who, or will it be seamless? I’m a programmer and I can see this being amazing or a paperweight, so I’m just trying to understand the built-in functions as per the video I saw on Facebook.

    • Hi Jason, thank you for your interest in Jibo! As you may know, we are still in the process of actively developing Jibo. We’re engaged in a crowdfunding campaign where Jibos are being offered as perks to that campaign. It’s premature for us to talk about specific implementation details. I do want to point out that JavaScript is quite different from Java. Per our FAQ, Jibo requires a WiFi connection.

  7. You guys mentioned an online simulator to test code in software before trying it out on the real robot. Is there any chance that the simulator will be out earlier than the actual hardware?
    Also, any integration with ROS coming up in the near future?

  8. Happy with JavaScript. Is there any way to get started early? I can’t find when the SDK will be available (like a date to start learning, even if before my Jibo arrives)?

    • Hi Scott, We love your enthusiasm! We’re not ready to release the SDK yet. As Andy notes in his blog, the system runs JavaScript in a node.js context. It may be helpful to make sure you are familiar with this powerful platform.

  9. Hi Andy,

    As a non-programmer, I’d be interested in what route you would suggest to learn js? I’ve played with Scratch a bit with my son, and that seems pretty easy to use. I am technically minded, but my skills are in network infrastructure, so I haven’t dug into programming much. Do you have any suggestions for specific books or websites etc? I can obviously Google this, but with the sheer volume of stuff out there, I thought I would be better to ask an expert! I’m pretty stoked at the thought of customising Jibo, but don’t know where to start!

  10. Where will our code run? On the Jibo microcontroller? On MyJibo servers on the cloud? Both? Will there be a way we can run code on our own servers that calls the MyJibo cloud just to interface to the Jibo?

    • Hi Chas, Andy may reply further. We’re eager to write more about how Jibo is programmed. But we’re also still in development and want to avoid making statements about functionality that could change. As you can imagine, some code runs locally on the Jibo system and some code involves interaction with the cloud. This topic seems well suited to be discussed within the Jibo Developer Community once it is open.

  11. Are you guys going to be putting up some idle animation, so it moves even when you’re not using it? Because adding some life to it like that would be really good, it would make me smile seeing him move around automatically.

    • Hi, we have the greatest robot animator I have ever met working for us. I have no doubt he will make sure Jibo is always up to something. For now it would be a little premature to commit to any specific behavior.

  12. A great many will want to pick up JIBO just out of curiosity. I was wondering if you can develop a fixed base, maybe a simple unobtrusive ring (like those used to mount smoke alarms) that would allow JIBO to be docked. I would envision a simple locking system (½ turn worm gear lock, magnetic latch, etc.) that mates with the bottom of JIBO that would be engaged and disengaged remotely or by a recognized (authorized) speech command given to JIBO from the owner(s).

    Also, I pre-ordered the SDK Edition. Was wondering if it included the basic SDK Toolkit so my daughter could work with it too.

  13. Hi,
    Thanks for taking the time to write this blog post (and the future ones); it was really interesting to read.

    Jibo seems to be incredible; the only point that worries me is privacy and security. I’ve read all that you’re saying in the FAQ, but can you make a post with more in-depth details? Have you also considered having a security team, bug bounties, and/or some security audits of Jibo’s platform?

    I think that open-sourcing Jibo could be another great choice. I understand the pros and cons for you, but I hope you will consider this option too.

    Thanks a lot for making me dream a little more today !


    • Hi Romain, We hear you loud and clear on privacy and security. As soon as the time is right we will publish additional details. The security of our systems and the privacy of our customers’ data is a big portion of my job and I take it very seriously. I hope you can understand why we can’t comment on certain things at this point in the development cycle. We want to be sure that when we do start addressing privacy and security in detail it is technically accurate and represents the latest design. Stay tuned!

  14. Hi Andy and Rich, I am keen to understand how you envision JIBO integrating into a smart home.

    I have backed Ninja blocks, Ubi, LIFX, Plugaway, Blink, I-BELL, Edyn, Almond+, Korner and bought EmonTX, LG SmartTV…

    I would love JIBO to be in the core of my smart home!

    • Hi Richard, We are getting lots of great comments and questions regarding Jibo’s integration with the smart home. I’m currently reaching out to industry experts. We’ll be looking to find the ideal standards and protocols to allow maximum bang for the buck. Andy and I will be blogging on the subject in the near future. Keep the comments and ideas coming. We’re reading them all and keeping notes!

    • Hi Richard, We are actively looking into the home automation space. As I’m sure you know, this is currently a fragmented market with many incompatible protocols. There are some promising trends such as hubs that help bridge different vendors’ products. It’s just too soon for us to make any specific commitment or comment on vendor relationships. Both Andy and I are actively evaluating our options. If you have specific advice feel free to email me. I am collecting input from many sources.

  15. Thanks for starting a dialogue with your up-and-coming Jibo developers. I think Jibo is a fantastic platform with a lot of potential and I can’t wait to see what developers come up with.

    I got the feeling, reading through the currently available texts about Jibo, that its speech capabilities would be limited to certain phrases pre-programmed into Jibo. Is this true, or will there be a text-to-speech capability so that we developers could have Jibo decide what to say programmatically? If not, I hope you’ll consider it, since it would greatly enhance Jibo’s potential for interaction.

  16. Hi, I’m a professional software developer, and I’m considering ordering the developer edition. I’m very excited to hear that JavaScript will be your application programming language, but I have almost no experience developing software for robotics or low-level device code. Before I plop down $600, I’d like to know that I’m going to be able to write useful software.

    Are you planning on having a high-level API, or a low-level API (or both)? For example, are we going to have to send bytecode to each of the hardware components, or will there be higher-level APIs available such as a simple function call that returns an array of objects for faces detected by the cameras and/or perhaps a function call to begin following/tracking a detected face?

    • @MichaelRod77: Yes, we do plan to include Text-to-Speech (TTS) capability in the product. For our initial demos and key apps, we’ll likely continue to use prerecorded phrases, but plan to support TTS for general speech as well.
      @X3HALOED: As you say, we’ll have a JS API (via the JiboAlive SDK) that gives you more control over Jibo’s “brain” (defining expression output based on sensory input), that’ll support things like returning an array of objects for faces detected. And we’ll have an even “simpler” interface via the JiboAlive Toolkit using web-based tools, where we’ll have a library of expressions (like Jibo giggling, or frowning, or whatever) that you’ll be able to tap into. No firmware or low-level coding required in either instance.
      Hope this helps,
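To illustrate what consuming such an API might feel like, here is a sketch in plain JavaScript. The face-object fields and the `largestFace` helper are guesses for illustration only; the real SDK’s types and calls may well differ.

```javascript
// Hypothetical shape of a high-level face-detection result, based on the
// reply above ("an array of objects for faces detected"). Field names are
// invented; the actual JiboAlive SDK may look different.
function largestFace(faces) {
  // Pick the face with the biggest bounding box -- one plausible way an
  // app might choose whom to track.
  return faces.reduce(
    (best, f) => (f.width * f.height > best.width * best.height ? f : best)
  );
}

// A mock detection result, as the SDK's camera API might return it:
const faces = [
  { id: 1, x: 120, y: 80, width: 40, height: 40 },
  { id: 2, x: 300, y: 90, width: 64, height: 64 },
];

const target = largestFace(faces); // the id 2 face, whose box is larger
```

The point of a high-level API like this is that app code reasons about "faces" and "tracking targets" rather than camera frames or firmware commands.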

  17. Hi Andy,

    I see that marketplace will include physical products as well as an ‘app store’. Any chance you can release the physical dimensions of the product early so that it might be possible to begin designing customisations for Jibo? thanks!

  18. I have a question about lower-level API access – specifically about the speech capabilities. It would be great if this was going to be open enough to allow developers to change languages etc. Same with other “basic” functionality such as vision. Ultimately, some things are going to require native code apps to really work.

    • Taking into account that JIBO has been an international success, it would greatly speed the localization effort if a chosen number of developers (that is, users who pre-ordered the dev version of JIBO) were given early access in exchange for helping with the implementation of new languages. I’d be eager to help with Latin American Spanish!

  19. Since the comments seem to stop around August 15th, I don’t know if this has been asked. I have been a professional programmer for over 30 years. I am interested in what other capabilities might be available through the SDK on top of the ones you mention being available through the Toolkit (i.e. animated movement, sound, visual, or a spoken phrase). I really want to know if we will be able to access TCP/IP or even Cloud Storage through the SDK for our own programmed skills (client side). I understand the security concern, and I fully expect incoming connections to JIBO to be heavily guarded (for security/privacy concerns), but outgoing connections, JIBO to JIBO, or even some JIBO cloud to external TCP/IP connection really interests me. Can you elaborate on what might be available for those using the SDK?

    • I’d love to know the answer to this as well since it makes a big difference in the type of apps we have the potential to develop. Accessing TCP/IP would mean availability to connect to outside services and would greatly expand Jibo’s capabilities. I’m guessing that most other developers are similar to me in that I already have a large list of possible apps for Jibo but I don’t know which are feasible yet. Can you offer any clues as to what might be allowed through the SDK?

    • Hi Arthur, It’s a little too early in the cycle to give out specific details about SDK calls. We are certainly here listening to feedback and suggestions. I think it is reasonable to assume that you will be able to make web service calls as those will be necessary to develop many skills/applications.

      • Thank you Rich for the reply. That is what I was expecting, and I am happy with your answer. As time gets closer to releasing the SDK, I’m sure I will have more questions for you. I look forward to talking with you and other developers in the Developer Forums once they are up… Again, thank you for your reply (and congrats on the hugely successful funding).

  20. Oh great, I have something in mind. This can help me with my daughter; she has epilepsy. I can program it to watch her movements and ask questions. I think I can program it to know when she is having a seizure.

  21. Would it be possible to include some accommodation for a mount, in order to attach Jibo to a fixed surface? In terms of elder care and seniors: they move. It would be nice to have a mobile platform as an accessory (possibly using a tracked chassis) such that Jibo could be attached to it and then control it. In the robot community this is common, and Jibo as a platform lends itself well to this use. Adding arm articulation could follow eventually, but as that is a more difficult problem, for now at least, adding a mobility platform seems a logical extension to the core of what Jibo adds to a home. As to a port on the back, I would support that both for security and practicality, as I believe the accessory hardware market for Jibo will be much larger than you might think. Everyone I have shown Jibo to wants one, and considers the price a bargain for the skills Jibo ships with. All of the programmers I have shown Jibo to want to write for Jibo (like, right now).

    It is my belief that Jibo will be the iPhone of the Robot world, finally offering a practical affordable robot to the public. As such planning for the hardware accessories to follow just seems prudent.

  22. I am looking forward to using JIBO to control my home. Everything in my home; Lighting, HVAC, Security, appliances, Audio and the Theater; currently use TCP for control. I have been into the “Internet of Things” for over 10 years advancing my homes functionality as the technology progressed. The main UI is currently iPads/iPhone and ProntoPRO devices all of which are coded in Javascript.

    In the theater I would like to use JIBO as a voice-interactive control, but am concerned about the background noise level (whatever is playing: TV, movie, music) when the theater is in use. Is the microphone system single or multiple microphones, and are they omnidirectional or focused where JIBO is looking? Keep up the good work; I am looking forward to becoming an active contributor/user.

  23. I planned on having mine reside in my office on occasion, but fear that unless you design JIBO with a built-in physical security feature to lock it onto a solid surface mount, I will never know what wonderful features you programmed it with, since I am sure that from curiosity’s sake alone, JIBO will see more of the outside world than I would like, as it is whisked away by enthusiastic admirers passing by my office. What if it grows fonder of them than of me and doesn’t want to return?

    Seriously, I have commented on this before because I see it as a problem unless you address it in the pre-production stage with a hook or something for third-party developers to attach their products to. (I could easily build a lockable, even voice-activated, base if it had that one simple feature!)
    Please address your plans for this.

  24. Hi, I have been reading the posts – starting in 2014 – and I am curious as to exactly where I can download the SDK. One blog post specifically mentioned there was a simulator, leaving open the possibility to start developing before the bot is actually in one’s possession. Here it is, one month into 2016, and it would be nice to actually get my hands on the SDK + simulator. I haven’t seen any specifics regarding achieving that goal.

    • The SDK is available for download and install via GitHub but is currently protected behind a login form restricting usage. The Jibo team has said very recently that access to the SDK will be open to early-adopter developers before the robot comes out in March/April, so you can expect to get an email sometime soon with your login info.
