Integrated Code (https://integratedcode.us) – Phil Estes, developer @ IBM

Dealing with Disagreement
https://integratedcode.us/2021/09/20/dealing-with-disagreement/ – Mon, 20 Sep 2021

Let’s admit the facts: as much as we want to believe we are making progress in working together as a more-connected-than-ever global village, we don’t do the “getting along” part very well. The news of the past year seems to constantly remind us of that as if poking at a fresh wound over and over. Arguing over masks, vaccines? Politics? Spreading, debating, or squashing misinformation? The heightened emotions and inherent global tensions of our pandemic era seem to only exacerbate our problems.

Sorry to disappoint but I won’t be positing solutions to this gargantuan challenge. Rather what I want to highlight is this non-profound corollary: tech isn’t immune to these very same relational issues. Nor is our industry immune to the same ill effects happening more broadly thanks to enduring (and then enduring some more) a global pandemic: isolation, frustration, mental health challenges, anxiety, and burnout to name a few.

In the midst of this, most of us have attempted to keep moving forward. Some have figured out life in the “home office” (a.k.a. spare bedroom), usually vying for space within an entire family of new online nomads. Many of us have seen an opportunity to head in a new direction and started new jobs. Some companies have recognized the need to temper the pace and are providing extra benefits, days off, and mental health support for employees. And, let’s be honest, most of us are hoping we can at least retain our sanity and livelihood long enough to get back to some “new normal,” whatever that may be.

Atop this undercurrent many of us work collaboratively with participants across the industry and, sometimes, across the globe. In the “before times” this collaboration often included face to face time: conferences, meetups, or just ad hoc gatherings where our communities could work and think together. In the absence of that experience—now approaching the two year mark for many—I am not alone in noticing that this has impacted our ability to work well together. We laud advancements in video conferencing, ogle over the latest new home networking infrastructure some have built, we like tweets of fancy new WFH setups, but no technical solution has replaced the critical need we have to see a person as, well, a real flesh and blood person. With that reality of presence comes the ability to treat them accordingly: as a fellow human. Now more often than not our humanity is veiled through the pixelated screen and our voices are replaced by keyboards. The person on the other side of the issue becomes less human, and when opinions diverge we are now directing our words and efforts against “that dumb idea” or “that competing vendor’s agenda.” The human is no longer visible and our behavior can easily follow, now choosing dehumanizing expressions and responses that only fan the flames.

So, what can we do? We’ve known for a long time that relational challenges—people problems—easily trump even the most thorny technical issues when it comes to complexity. Yet we still spend a lot of our focus and effort on solving, well, technical issues. I’m going to suggest that we all take time to think through the following list and make it a priority to do some introspection regarding areas we can personally improve. While it can be easy to read a list and think of all the other people we know who need improvement, the focus needs to be inward first.

Examine your biases. A recent tweet asked people to consider why they accept certain people’s statements at face value and not others. It’s a great test and challenging thought exercise.

Don’t prejudge feedback based on the source. This is potentially a more specific version of the first item. All too often in multi-vendor spaces where the (cringy?) word “coopetition” is used there can be a temptation to assume a person’s viewpoint is driven by vendor politics. If you are working with competitors on a collaborative underpinning (a standard, an open source core project) you need to find a way to take off your vendor hat to discuss differing opinions.

Find where you agree. It can be a great exercise (and tension diffuser) to find some common ground and express it clearly. “I agree with your point about X. Maybe we can start from there.”

Find out where you truly differ. Related to finding where you agree, sometimes the visible disagreement is masking a core disagreement. Try spending enough time to peel the onion to find if there is a more important, core issue to resolve together.

Learn from someone who doesn’t think like you. This can be difficult but rewarding. With all the talk about DEI I think many times we are still missing the real value of diverse insights. Why? Because a lot of our communities have unconsciously enforced group-think or even a lack of awareness that we are not truly open to new voices and ideas.

Ask someone you trust for feedback. It can be very useful to invite someone not “overheated” with emotion or not as invested as yourself in the conflict or disagreement to provide feedback. Give them enough details/references to make their own assessment but be careful to not bias them towards your viewpoint if at all possible.

We all know there is no simple reductive answer to Rodney King’s “can’t we all just get along,” but hopefully with some introspection and humility we can improve the way we deal with others with whom we disagree.

I’m joining AWS!
https://integratedcode.us/2021/01/11/im-joining-aws/ – Mon, 11 Jan 2021

Apparently coming as a surprise to many, I have made the difficult decision to leave IBM after 26+ years with the company. Today I am joining Amazon Web Services as a Principal Engineer in the container compute team.

First and foremost, I must acknowledge that I had an amazing career journey at IBM. I would not have stayed 26 years if that were not true. I had so many opportunities to do interesting work and so many things fell my way; it was definitely not something I carefully architected and controlled!

I got to work on porting important software, like the IBM JVM, to Linux thanks to being an early Linux user at IBM. Years later I was invited to join Linux distribution work that led to a role where I grew in both leadership and scope over a number of years, including managing IBM’s relationships with distro providers Red Hat and Canonical. During that time I learned a ton about open source software development and communities. The end of that era led me right into the explosion of Docker and containers, where I invested the last 6 years of my IBM career.

Joining AWS, and specifically the container organization under Deepak Singh, is a perfect next step for my career. Given my deep involvement in the CNCF containerd project, my work in the OpenSSF, and my interest in helping at the complex intersection between contributing and adopting open source software and managed cloud services, I’m looking forward to seeing how I can help Amazon, and specifically AWS, continue their journey providing container offerings and using and contributing to open source projects in this space.

I’m excited for this specific Day 1 and all the “day 1s” to follow!

25 Years at IBM!
https://integratedcode.us/2019/08/14/25-years-at-ibm/ – Wed, 14 Aug 2019

I just recently passed my 25-year anniversary as an IBM employee at the beginning of August. Of course it’s hard to believe I have really been employed as a software developer for twenty-five years, but it’s true! To commemorate this occasion I used Twitter to share “25 memories” of my 25 years for the days leading up to my actual anniversary date. Given many people seemed to enjoy them, and the fact that they can be hard to find and re-assemble in the Twitter UI, I have replicated the twenty-five memories here for posterity. It all started with this tweet on July 7th:

Memory 1: {tweet}

An offer. In June 1994 I was back to mowing lawns in south Florida heat, having papered my bedroom door with rejection letters from every small hardware company I wanted to work for.
IBM had been a great experience as an intern–3 sessions over my 4 years of university working for different software labs in Boca Raton–but I wanted to work in hardware design, with no experience, in 1994 when “software was the future” of everything. But by late June with no other opportunities, I finally took a call from IBM who said “every manager you had at Boca had great things to say about you and we’d like to at least talk about a job offer.” Long story short I agreed to an August 1 start date, working in the OS/2 kernel performance team. I would make more money than I had ever dreamed of, and definitely more than my parents made at the time. (Not everyone in America comes from rich parents, even if you are white.) As you might expect, I’ve enjoyed many things about IBM to have lasted 25 years, and I’m definitely glad I “gave in” to talk to IBM that summer 25 years ago.

Memory 2: {tweet}

My first real software development role at IBM was on the OpenDoc team after about 9 months as an employee. OpenDoc was a joint project with Apple and our Taligent partnership. I learned a ton on this team about software development practices.

Byte Magazine special feature guide to OpenDoc

It included hard lessons like negative feedback on my code (I actually accidentally overheard a conversation between the team lead and project architect about one of my first code commits), and it forced me to really learn how to think about code and design. I learned the value of stepping outside the explicit “work assignment” to learn; I wrote a distributed chat app for OS/2 that I used with friends with the DSOM libraries we were using to build OpenDoc. Sure, CORBA was a mess, but it was a fun project and I learned a lot. Fun fact: It was Steve Jobs returning to Apple–long before any glorious future of Macs, iPhones, and iPads–that was the death of OpenDoc.

P.S. At some point pieces of the OpenDoc code base ended up on GitHub; you can see some of my changesets here. Yes, I’ve been “porting” almost my whole career (JVMs, Linux distros, etc.).

Memory 3: {tweet}

Editor’s note: Memory 3 landed on July 9th, the day the IBM–Red Hat acquisition deal closed.

Today’s memory just had to include @RedHat, so let’s talk about Java on Linux…back in 1998 and 1999. I was already an avid Linux user and had Linux installed on a PC in my office at IBM Austin.

Red Hat Linux 5.2 boxed version

We had already been a Sun Java source licensee and ported the JVM to OS/2, and then made a Windows variant with our own JIT and performance improvements. Now someone asked if we should do a Linux port, given our newfound interest in Linux at IBM. Long story short: I was handed the skunkworks project with one other developer (from the JIT team at IBM Tokyo research) to port the existing JVM codebase to Linux. My manager purchased a “real” boxed Red Hat as legal wasn’t sure if my downloaded Linux install was “kosher” :) I had a Sun Ultrasparc on my desk, an AIX workstation under it, and my Linux and OS/2 boxes (this was when I got labeled as a hardware hoarder) and effectively did a 3-way merge of the UNIX-like codebases of AIX and Solaris and our x86-based Win & OS/2 ports within a couple of weeks. This “2-week sprint” (before anyone talked about sprints) became the basis of our JVM 1.1.8 on Linux that was announced by IBM execs at JavaOne that summer; it beat Blackdown in performance and later beat Sun’s own Linux port of 1.2.

IBM OTAA Award: JVM development, 1999

It was a big achievement at IBM and won me my first Outstanding Technical Achievement Award in July of 1999 that still hangs on my office wall today.

Learnings: open source licensing and distribution (the AWT linked to libmotif.so on AIX/Solaris, and we ended up distributing a binary of licensed Motif in our JVM package), and working across the globe (the Tokyo JIT developer and I were in opposite timezones!). And thanks to Red Hat for the distro and development platform that enabled my work and led to a successful IBM JVM on Linux in 1999.

Memory 4: {tweet}

The “I’ve Been Moved” experience—Boca Raton to Austin, Texas! Less than one year into my career we got the news IBM would be closing the Boca site and offering to move us to Austin, Texas.

1996 Article on the IBM Boca – Austin Migration.

As newlyweds and with no ownership of anything in south Florida, my wife and I were excited for the new opportunity, although we did look around to see if we wanted to stick to the East Coast or try out this crazy city in Texas. In the end we loved our five years in Austin, Texas, although I could have done without the heat! :) The “survey trips” were pretty crazy—full chartered flights of IBMers being treated to the best of Austin for a weekend. Our first Halloween on 6th Street. Austin is still a thriving IBM location today, and I still enjoy visiting the site and driving around the city to memorable locations—like the first home we owned and where our first two children were born.

P.S. Can you imagine researching a new place to live without Google?? We did have “Yahoo! Directory” and, yes, we had web browsers in 1996. I created a web form internal to IBM for IBMers to ask questions about the move–I think it was backed by a Perl script in cgi-bin 🤓

Memory 5: {tweet}

Recently @DustinKirkland asked people to share “first biz. trip” experiences. My trip to Istanbul, Turkey in 1999 always comes to mind. It might officially be my 2nd; I think I had one uneventful trip right before that to US Bank in MSP. I had built expertise in the JVM (as you might imagine from memory #3 a few days ago) and was getting called to help customers with hard JVM problems. In this case Yapi Kredi Bank in Turkey was working with IBM on a Mozilla-based teller system using Java applications. I know. Equal parts excited and scared as a very young developer, I got called into a VP’s office and was told I needed to go to Turkey immediately and that my main job was to make the customer understand IBM cared and was sending their best to help solve the technical challenges. Craziest part: arriving in a very foreign place; a guy is holding a sign with my name, speaks no English, and then proceeds to drive me for over an hour to the far eastern part of the city to YKB headquarters. No phone/Google maps; nothing to verify I wasn’t just kidnapped. 😂Humbling reminder we have no control and sometimes are very lucky & blessed: 6 days after I return home on August 11th, there is a massive earthquake in Turkey, killing 10s of 1000s and causing considerable damage in Istanbul.

Memory 6: {tweet}

Let’s do a funny one for today, although it might be going back to before I was a full-time employee. Summer interns were “supplemental” IBMers and therefore had to fill out a timesheet–on a mainframe. 🤓

The IBM workday at the time was from 8:00am to 4:42pm. Why 4:42? Because the day was divided into 6-minute increments, and lunch was 7 increments–so 42 minutes. To work an 8-hour day meant working until 4:42pm with 42 minutes for lunch. 😂 Being at the Delray Beach offsite campus meant we had no cafeteria in the building and mostly went out to lunch. The local IBM regulars had implemented an (unofficial) “ELP”: Extended Lunch Program. By the time we drove somewhere and got lunch it was easily over an hour. But it was the busy days of OS/2 and most people weren’t leaving at 4:42pm–so the hour (and a half?) lunches of the ELP were well within working an 8-hour day. By the time I worked at the main site in later summers, dinner was being provided on-site to deal with the long hours!

Memory 7: {tweet}

Another early business trip (1997?) led me to IBM Hursley, UK, and it’s become one of my favorite sites to visit to this day. I spent two weeks there doing a rotation in our “Java Technology Centre” (JTC) which was based in Hursley.

Hursley House, IBM, outside Winchester, UK

While most IBMers do not work in Hursley House (the beautiful 18th century mansion), the grounds and setting, just over an hour SW of London, are peaceful and beautiful. I still enjoy going to Hursley today, and it reminds me of my 2 great weeks there in my first big project.
To local IBM’ers it might just be a place to work; visitors don’t experience “real life” in any place they visit short term, but I’ll always jump at a chance to visit the site. It’s an easy train ride from London or rental car from LHR! More info on Hursley House from Wikipedia. As a bonus, Winchester, the nearest city to Hursley, has a nice green and cathedral and walks along the River Itchen are really beautiful.

Memory 8: {tweet}

A lot of IBM memories involve people. At the risk of leaving some great people out I’m going to highlight a few special friends and leaders who have been huge in my career. First is Alex @tarpinian, a colleague, manager, friend, and mentor. Alex gave me my first significant promotion around 2000, to “Advisory Software Engineer”; he had already been my manager on a project several years prior and was now managing me again. Alex was a great manager and friend who gave us all opportunities for growth and promotion. After almost 25 years we still work in adjacent areas and still get to interact now and then, and I see Alex as a person I have turned to many times across the years for advice and opinions, and as a sounding board regarding decisions I’m making. He graciously allowed me to seek out an opportunity back on the East Coast in 2001 when I told him that as a family decision we wanted to be closer to our extended relatives—he could have made it much harder, and yet he helped make it possible at a loss to his team. Thanks Alex for all your help and support across the years!

Memory 9: {tweet}

Thanks to the prodding of another early manager and friend, @keithfinley, I decided to apply for the Master’s program at @CockrellSchool of Engineering at @UTAustin in 1996. I worked full time and joined the “executive” program for 2 years.

December 1998: UT Austin Engineering School graduation.

I always say my Master’s was a great value not because I learned specific technologies or software engineering methods, but because it forced me to grow my presentation and communication skills, and to navigate the complexities of team-based exercises, which were extremely common.
It also gave me a fascination with compiler technology and ASTs due to a specific class sequence, and I ended up writing a Master’s thesis on the static analysis of Java code, which included an implementation. Yes I used TeX to typeset my thesis. No I don’t have the source :(

Master’s Report; UT Austin

It was definitely a privilege to receive a Master’s degree, and IBM has changed in that I don’t think this benefit exists today, but I’m really glad I was able to take advantage of the possibility at the time, and I know it has positively impacted my career through the years.
So within 3 weeks in late 1998 I became a dad for the first time, and received my Master’s Degree. 😎 Bonus: I became a Longhorn for life. Hook’em horns!🤘

Memory 10: {tweet}

I spent the largest portion of my career in IBM’s Linux Technology Center or LTC for short. From 2003 to 2013 I had the privilege of taking my love for, and use of, Linux and actively working with Linux distributions for my day job.

Dan Frye at the LF Collaboration Summit in 2010.

Dan Frye, LTC founder & exec, pictured here, led the group for many years and I’m privileged to count him as a mentor and friend. The LTC represented a transitional moment in IBM history–while open source wasn’t new to IBM, commitment & investment in OSS dramatically increased. Some LTCers have moved on to other roles outside IBM in later years of the group, but we had an amazing group of Linux devs. I don’t think I can even list all the awesome people I worked with: @DustinKirkland, @anliguori, Scott Moser, Serge Hallyn, Paul McKenney, and many more! I had some great mentors and managers during those years: Randy Kalmeta, Barb Wang, Joanne Guariglia, Kathy Bennett (took over VP, LTC from Dan Frye), Sheila Harnett, Steve Best, among others. My promotions to Senior SW Engineer and STSM came during my LTC era.
More info on the LTC from the early days (founded August 1999) from IDG’s IT World, developerWorks and Wikipedia.

Memory 11: {tweet}

I hope every IBMer has a chance to visit an @IBMResearch site during their career. My first opportunity came in 2000 when I visited IBM Almaden, perched on a hill near San Jose. I was working with a team of researchers on a tuple-space implementation.

IBM Almaden, San Jose, California

There is something impressive about our research sites—a history of inventions across so many disciplines, and ongoing research in all kinds of interesting fields. Where else do you see real lab equipment in an IBM office building!? I remember walking past the office of T.V. Raman–the creator of emacspeak–who was at Almaden during those years. His computer was reading his screen to him at an amazing rate! Years later I would also get to visit the T.J. Watson Research Center in Westchester County, NY, and have been many times since. With the advances in Quantum Computing and the creation of @qiskit coming out of Watson, it’s an exciting place to visit as well.

IBM Thomas J. Watson Research Center, Yorktown Heights, New York

With the commemoration of the Apollo mission 50 years ago, it’s amazing to work for a company that was deeply involved in placing people on the moon and is today working on everything from Kubernetes to quantum computing, powered by significant research communities worldwide.

Memory 12: {tweet}

Working with IBM’s Linux distribution partners and Linux distro packaging within the LTC for a decade, I had one interesting “side job” added to my career activities in the 2000s: handling Linux use due diligence for acquisitions! Sometimes I wasn’t part of the official due diligence team; other times my team merely handled remediation of distro sourcing and usage post-close, but getting to actively participate in M&A activity was a learning experience and fun diversion from my usual tasks. Anyone worth their salt in the 2000s had a Linux-based appliance (especially storage and network acquisitions) so we had plenty to do during that decade! Ones I can remember participating in: Storwize, Diligent, Netezza, XIV, Guardium, Cognos, Datapower, Internet Security Systems (ISS), FilesX, BLADE Network Technologies and Nitix Server. So many interesting Linux “hacks” and configurations across that group, including interesting CPU/arch combinations in some cases (for management devices..MIPS, SH-4, ARM, etc.). It was a fun exercise to see how each was assembling, using, and managing Linux for their products. While most of those experiences are almost 10 years old now, it gave me a depth of understanding in the merger/acquisition process I never would have known, and many friends across a long list of acquired companies that I still talk to today.

Memory 13: {tweet}

Sometimes it pays to have a really old email archive :) I was trying to remember my first open source contribution. Even though I worked on Linux in the LTC, I wasn’t really tasked with upstream development as a main part of my role. But, in the normal course of building and laying down distro images, every once in a while I found something worth submitting upstream. In this case it was a minor issue with my particular partition mount setup and the time zone tools packaged with libc. This is from November 2005:

Bug report submitted to the glibc project. November 2005.

I attached a suggested patch to the bug report. You can read volumes online about people clashing with Ulrich Drepper (sort of like the Linus for glibc) over the years, but I got this very anticlimactic response; and with zero fanfare my first real upstream patch was merged. 🎉

Ulrich Drepper response. December 2005.

Memory 14: {tweet}

Our trusty Trek Hybrid bikes in 1995.

In my first year at IBM my wife started working at IBM also. After getting married and settling into an apt. in Delray Beach, we went out and bought brand new Trek hybrid bikes. Soon after we decided it would be fun to ride to work together.

Us in 1995 – Fashion! :)

It was winter in Florida, so we weren’t sweating to death, and it was kind of fun to have a leisurely ride to and from IBM each day. One morning, however, our handlebars got a bit tangled while riding side by side (and maybe me doing something silly 😇) and we both went down!
We were close enough to IBM to finish up our ride, but had some cuts and abrasions that were bleeding pretty well by the time we arrived! We must have looked hilarious walking into work like that. Thankfully IBM still had a medical office on site and we got patched up easily.
The good news is we are still riding together after all these years! We’ve done everything from sprint triathlons to a huge 2018 trip from Munich to Venice by bike. We even rode 13 miles around our local community this morning. 🚴🚴🥇❤🚲

Munich to Venice: Summer 2018

Memory 15: {tweet}

Regular international travel has only been a recent part of my career experience, but working “around the world” has been something I’ve experienced regularly, given IBM has employees in almost every corner of the world. The richness of getting to work alongside IBMers from many cultures has been an awesome part of these 25 years, and the more recent possibility to see them face to face has only heightened the reward of working globally. Having someone proudly show you their city, sharing a traditional meal with a colleague, or simply finding out the “sameness” of life experience even across continents through conversation has been a rich addition to work relationships over the years. I’m also thankful for the opportunity to see life outside of my American-centric experience, learning about other cultures, traditions, and life experiences. Expansion of my “work relationships” to open source communities has added even more opportunity for this recently! Here’s hoping for more opportunities to eat, talk, and enjoy work and life with people from every corner of the globe! 🌎

Memory 16: {tweet}

It’s a Monday. By simple calculations, I’ve probably spent 1,100 Mondays at work since 1994. And Mondays are a good time to remember that work isn’t always exciting, thrilling, or even fulfilling every moment of every day. Whatever the reason Monday gets a bad rap, it’s a good day to remember that many days out of 25 years had no highlight or notable accomplishment, and weren’t even particularly enjoyable. Work is like that, and there were times I’m sure I wasted time I could have spent meaningfully. But, slogging through slow days; deciding to attack a random problem just to dispel boredom with the day-to-day; or simply going to your lead or manager and asking for something to do is all part of a long-term career in any field, I assume. I’m pretty sure I learned some meaningful lessons in the “non-notable” days, weeks, and months over 25 years. I definitely learned to press on, even through some not-so-fun assignments, and they usually led to new opportunities or at least a growth experience. So, happy Monday out there. Keep slogging along and reach out for help if you need to; better times are ahead, even if it means stepping out of your comfort zone and finding something new to do!

Memory 17: {tweet}

Another fun side project early on: someone built a PE (binary) loader for the OS/2 kernel, and we were thinking we might be able to run Windows binaries on OS/2 if we could map the Win32 syscalls to OS/2 libraries. I was between my OpenDoc assignment and the formation of the Java porting team, and was assigned to help @gfxman & @mikebrow with this experiment. My test binary was the Borland C/C++ compiler for Windows. I was basically working through the core Win32 calls and DLLs one by one, mapping them up to OS/2 calls and then trapping which ones were still being called and needed to be wired up. I was so close to having the compiler actually working before the experiment ended. As an aside, did you know that as late as Windows NT you could still run OS/2 binaries natively on NT? The base OS/2 DLLs were part of the NT install, and character mode programs could run successfully, IIRC. This short exercise didn’t lead to much, but it was a great chance for me to learn the intricacies of object linking/loading at a low level, which would come in handy later when I wanted to dig into ELF on Linux and understand libc, ld.so, & cross-toolchains.

P.S. Kelvin (@gfxman) was a great early mentor in my career and someone I always looked up to. And who knew I would be working with @mikebrow again 20 years later on @containerd and OCI container stuff!

Memory 18: {tweet}

My first conference talk was in November 2000. I was asked to represent the tuplespaces work (from IBM Almaden–a prior memory) at a historically OS/2 conference that had become Colorado Software Summit by this era. By this time I had never spoken in front of more than a few people in my department, aside from presentations during my master’s program. I had all the experiences you would expect a new speaker to have–dry mouth, talking super-fast, forgetting half of what I wanted to say. 🤣 I’m pretty sure it was a train wreck, and I actually had to give it twice (it was a conference expectation at CSS)! But everyone has to start somewhere, and that was my humble beginning in public speaking. Sorry if you were there! 😇 I hope I’ve improved since then! I had a long break since my LTC role didn’t provide opportunity for external speaking, but I had many opportunities to speak and present regularly to internal audiences that helped prepare me for re-entering the conference world in 2014. Since then I’ve had 50+ opportunities to speak over the last 5 years, and every experience teaches me something. I also try to learn from great speakers like @lizrice, @kelseyhightower, @bcantrill and others; all the while realizing in many ways I’ll still be “me.” If you get the chance to speak and are interested, take it! I’ve seen so many amazing first-time speakers in the last several years. Many of them thought they couldn’t do it, or wouldn’t be any good. You won’t know until you try! Oh, did I mention it was a family affair? At Keystone Resort at the beginning of winter. Fun times–it started on DST weekend, so our oldest got up at 4am the first day if I remember correctly. 😅 I also ended up with a major cold/sinus infection out of the trip, but hey.

Our little family in Keystone, Colorado in November 2000.

Memory 19: {tweet}

I still clearly remember the first day I visited the @Docker SF office on Sansome in 2014. I even remember waiting at a Starbucks around the corner for @jrmcgee and @cloudtroll so we could walk over together. There were a couple of purposes for the meeting, and mine was to finally meet these crazy open source maintainers at Docker I’d been hanging out with for a few months online on IRC and in the Docker project on GitHub. It had that first day of school awkwardness at first 😂; there were @jessfraz & @crosbymichael, @LK4D4math & @vieux; all people I knew online but had never seen in person before. I think we went out to lunch, and I met a few others from the team, and saw @jpetazzo in the lobby. This group of people, as well as others who came along later (@icecrime, @derekmcgowan, @cpuguy83, @stevvooe), were all instrumental in greatly improving my Go skills (I was new to Go in 2014), bringing me on as a maintainer, and helping me be more effective as a contributor. More importantly, the friendships that grew out of that maintainer group I expect will last a very, very long time. As @jessfraz tweeted earlier, there are special teams that you will always remember, and this is one of them! So thankful I still get to work with a few of them!

Memory 20: {tweet}

Becoming a remote employee. In 2006 I moved home, and more specifically, geographically distant from any IBM office. I’ve just passed spending half my 25-year career working out of my house. 12 years in traditional offices, 13 at home. Thanks to the flexibility of the LTC organization at the time I moved home, there was no pushback at all. It was a personal and family decision, and I had the full support of my management. After 13 years at home, I can say I’ve acclimated pretty well to this mode of work. I’m sure it isn’t for everyone, and there are definitely some downsides, but IBM as a whole works globally in so many cases now that there is never one perfect spot for someone in my role to be located. Most of my calls include people from at least 3 or more timezones every day. At this point I would not want to give up the flexibility for home/life “interruptions” with a busy family, as well as the efficiency of no transit/commute and the ability to focus well when I need to be totally disconnected. It also has made me immune to all the changes in office trends over the past decade; I have the setup I want with plenty of light, a door I can close, and a stocked kitchen that hopefully rivals most startups. 🤣 Working at home has made travel more worthwhile because total disconnection from face-to-face communication is not healthy long-term. I like the cadence of “busy week on the road with friends/colleagues/open source people” followed by “quiet week getting things done at home.” When I hit my 10-year anniversary of being in my home office I wrote up my thoughts then. You can check it out on my personal blog.

Memory 21: {tweet}

People. So many people over 25 years have had some form of influence on my career and I could never list them all. However, there are very few people I’ve known since the first day I walked into IBM (as a college student intern) until today. Mary Jane Delaurentis is one of those few people; she was my manager in OS/2 System Test during one of my early internships (probably Summer 1991). She was a good manager and mentor to her employees. She was kind, but also tough, and pushed me to succeed at whatever I did. We didn’t interact as much years later; I believe she ended up working in Software Group HQ in a strategy role, and later retired from IBM. But we connected on Facebook and for years now she has continued to cheer on my successes and follow our family through many stages. Whenever I think of the best managers and mentors in my life, I always think of Mary Jane Delaurentis. She’s not on Twitter, but she’s on Facebook quilting away in her retirement and enjoying life, and I love that we get to still interact in each other’s lives. Thanks Mary Jane!

Memory 22: {tweet}

It’s a Sunday and we just started a short family getaway–so memory 22 almost didn’t get posted on the 22nd day 😅 But it reminds me that my career has had some tough deadlines, some long hours, but rarely any extended period of imbalance. Rather, I’ve felt very comfortable for 25 years using most or all of my vacation time and rarely if ever needing to work on weekends. In fact, if I do, it’s usually because I can’t let something go or have an interest to finish something; not due to actual management pressure. I’m thankful that IBM has a focus on work/life balance, and every manager of mine has been extremely sensitive to family needs and special situations that required more flexibility in work hours. I know not every corner of IBM is like this, but that’s been my experience. Hope you all had a great weekend!

Memory 23: {tweet}

Most of my career has been behind the scenes; operating system plumbing; JVM plumbing; Linux plumbing and images: 10+ years where my “customers” were IBM product teams and newly acquired companies. No conferences, no external blogs, nothing. You can imagine how significantly my career changed when I was asked to get involved with this “new Docker thing” in Summer 2014. Everything was external; everything was open source; all my work was public, and it was on a platform that was being used by developers everywhere! It’s been a very new twist to my career, but an insanely fun experience to work a majority of my time external to IBM: with communities, foundations; with people from around the world working at various startups, universities, tech giants, and everything in-between.

Community members seen in the 2015 DockerCon SF opening video.

Maybe the wildest experience of this season of my career was to be a character in the opening cartoon to the 2015 DockerCon in SF—which highlighted the growing open source community around the Docker project. The 2 coffee cups, my bike helmet, the IBM t-shirt–it was perfect! To have it reprised again a few years later when @jrmcgee gave an IBM keynote at the DockerCon in Copenhagen was fun as by then I was a Docker Captain and many of them had never seen the original cartoon.

Jason McGee keynote at DockerCon EU Copenhagen.

So, I’m not sure I’ll ever be in another cartoon, but it’s been a wild ride from “behind-the-scenes guy” to an externally active open source maintainer and speaker. I truly enjoy the friendships I’ve made worldwide, and all the places I’ve been able to visit thanks to this role. As much as I love helping others at IBM get involved in open source and become externally active now, I’m afraid my answer to “how did you do it?” was “I was asked to go figure out this Docker thing in 2014, and it just all happened!” Here’s the link to the complete 2015 DockerCon opening cartoon.

Memory 24: {tweet}

It’s starting to hit me how long 25 “tech” years are; just how massive the changes have been. No one was thinking about smartphones 25 years ago. No one had a “MacBook,” let alone a laptop. Devs were longing for 8MB of RAM. A 486DX was cool. For a fun memory lane experiment, I tried to think of all the environments I developed in–of course OS/2; Windows for a while; Linux for a long time. I had Solaris and AIX “desktop” machines for a while. I used VM (s390x) and installed Linux on mainframes using “IP CMS” commands. I’ve done 3-tier webapps with XML in DB2 with XSLT, Java applets, and Java backends. I’ve written 1000s of lines of automation in Perl and bash, C and C++ code for Linux, OS/2, and Windows. CSS, HTML, SQL, and even Lisp (Master’s Degree classes); M4 (macro), Make, x86 ASM… Golang has dominated the last 5 years of my career, including working with Linux at the syscall level. It’s been quite a journey, and the great thing is that there is always something more to learn. I hope I keep learning and adding to my personal list; any recommendations?

Memory 25: {tweet}

By the calendar, today marks the last day of 25 complete years at IBM, making tomorrow day one of year 26. How and why did I stay so long at one company? That’s a good question, and not necessarily a question with a simple answer. Several years ago when I started interacting with people at startups I felt self-conscious for having to answer “20 years” for how long I had been at IBM. Most people were in year one or two at a company, and talking about what was next, or how they were looking at a change. Five years later, I’ve come to terms with the “lifer” jokes, and realized that I’ve stayed because of the opportunities to grow my career even at 25 years. I’ve thought about leaving; I’ve had a few offers, and I sometimes wonder what a career of company hopping would be like.
But I’m now an IBM executive (as a Distinguished Engineer) and have more potential to have real impact inside and outside IBM in my current open source-focused role, with all the benefits of a 25 year-old network and backing of management and leaders. I’m well taken care of, have great benefits, and work with great people. I can’t promise I’ll never leave, but the reasons I’ve stayed become clear when I sit and think about it. So, here’s to reaching 25 years, and looking forward to waking up tomorrow and starting year 26! 🎉

Celebrating 25!

August 1st, 2019 was my exact 25th anniversary date, since it was August 1st, 1994 when I first walked into the lobby at IBM Boca Raton to start my first day as a full-time employee. On my anniversary day I was enjoying some vacation with the family to celebrate:

If you like Twitter’s “moment” feature you can also see all of these memories as a Twitter moment:

containerd Graduates in the CNCF!
https://integratedcode.us/2019/02/28/containerd-graduates-in-the-cncf/ – Thu, 28 Feb 2019

Today is a big day for the CNCF containerd open source project. Today we become the fifth CNCF project to reach graduated status! For completeness, the existing graduated projects are Kubernetes, CoreDNS, Prometheus, and Envoy; you can see the full list of CNCF projects and maturity status here.

There will be plenty of news on today’s graduation, so rather than write a long post myself I’ll update the following list with the official press release, blog posts, and media outlet items as they appear:

But maybe you ended up here and are wondering “what is this containerd thing anyway, and why should I care?” Thankfully, the maintainers and contributors to containerd have generated a lot of content on this topic over the past few years and I will highlight a few here that may help you get up to speed:

A few final thoughts from me in closing—as someone who has been involved since the first hallway discussions about the need for a smaller, less opinionated core container runtime than Docker for the industry. Today represents a significant milestone in the discussion that started with some unhealthy rumors and rumblings about “forking Docker” years ago. I have shown the following slide in several containerd talks to try and represent the flurry of calls for something “boring” to sit underneath higher layers of the stack, including both Docker and Kubernetes, but envisioning much more than simply those use cases:

The calls for “boring” container infrastructure circa 2016

I think where we are today in the containerd project, with a clear and useful client API and specific features like the v2 shim (now used by Kata Containers and AWS Firecracker, and supporting OCI runc alternatives like gVisor and IBM Research’s Nabla project), is really amazing. It has become a really impressive base layer that I believe will be the underpinnings of a lot of container innovation for years to come. Congrats to my fellow maintainers, reviewers, contributors and all those who have tested, reported bugs, worked with us and made today a really special day in the timeline of containerd!
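
If you have never looked at that client API, here is a minimal sketch of what driving containerd directly from Go looks like. Treat it as illustrative only: the socket path, namespace name, and image reference are just common defaults I picked for the example, not anything specific to the projects above.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the containerd daemon over its gRPC socket (a common default path).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Every containerd resource lives in a namespace; "example" is an arbitrary choice.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull an image and unpack it into a snapshot so it is ready to run.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled image: %s", image.Name())
}

From there the same client exposes container and task APIs to create and start workloads, which is the layer Docker, the Kubernetes CRI integration, and the projects above all build on.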

Resources

IBM Developer Video: Thoughts from me on the graduation of containerd

Why I love containerd…and Docker!
https://integratedcode.us/2018/10/17/why-i-love-containerd-and-docker/ – Wed, 17 Oct 2018

I talk a lot about containerd. I write blog posts about it, speak at conferences about it, give introductory presentations internally at IBM about it, and tweet (maybe too much) about it. Due to my role at IBM, I’ve helped IBM’s public cloud Kubernetes service, IKS, start a migration to use containerd as the CRI runtime in recent releases, and similarly helped IBM Cloud Private (our on-premises cloud offering) offer containerd as a tech preview in the past two releases. Given that backdrop of activity and the communities I participate in, I obviously hear a lot of chatter about replacing Docker with {fill in the blank}. Given my containerd resume, you might assume that I always think replacing Docker is the right step for anyone working with container runtimes.

Replace Docker!? or “Choose The Right Tool For The Job”

Maybe due to historic frustrations and/or differences of opinion across the container runtime space, some have failed to see that picking the right tool for the job is just as valuable in this context as it is in any other. There have definitely been “party lines” drawn in some circles based on vendor-affiliation, or some basing decisions off the latest arguments on HackerNews. But, let’s ignore that (which, I’ll admit, is good advice generally!) and look at what we are talking about when we compare the Docker toolset to any of rkt, cri-o, containerd, or any other runtime alternative.

Comparing features across containerd, Docker engine, and Docker Desktop; thanks to Bret Fisher for the idea

This graph shows a sampling of features you might expect or want from a container platform. You can see clearly that containerd contains only a fraction of the capabilities of either the Docker Desktop stack or the pure Docker engine itself. As a [potentially poor] analogy this is like saying I can take away the IDE that my development team uses and provide them with /usr/bin/gcc as a drop-in replacement. You might ask: “well then why use containerd?” Because operationally, containerd makes perfect sense as an implementer of the CRI API from Kubernetes, and as a lower layer life-cycle manager under the feature-rich Docker offerings shown above.
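
To make the “implementer of the CRI API” point concrete, here is a minimal sketch of a client speaking CRI to containerd the way the kubelet does. It is only an illustration under a few assumptions: the socket path is containerd’s common default, and the v1alpha2 CRI API shown is the version in broad use around the time of this post.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// containerd's CRI plugin serves on the same gRPC socket as containerd itself;
	// the kubelet is pointed at it via --container-runtime-endpoint.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Ask the runtime to identify itself over the CRI RuntimeService.
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.Version(context.Background(), &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("CRI runtime: %s %s (API %s)", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}

The kubelet does essentially this (plus the image service and sandbox/container calls) against whichever CRI runtime it is configured with, which is why swapping Docker for containerd at this layer is an operational change rather than a feature-for-feature replacement of everything in the comparison above.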

Docker is More Than a Container Runtime

To take this same comparison, but look through the lens of open source, let’s take a look at the number of open source projects that are involved in a Docker Desktop installation on macOS:

  • opencontainers/runc (OCI)
  • containerd/containerd (CNCF)
  • moby/moby
  • docker/cli
  • moby/buildkit
  • linuxkit/linuxkit
  • docker/compose
  • kubernetes/kubernetes (CNCF)
  • docker/libnetwork
  • containernetworking/cni (CNCF)
  • docker/machine
  • theupdateframework/notary (CNCF)
  • …and more (VPNKit, DataKit, etc.)

Docker, the company, has taken these components–many of them written and maintained over the years by Docker employees–and has created a self-contained developer environment in a freely available product with everything built-in and working out of the box! Yes, you can replace this with a stack of your own creation, utilizing much of the same open source. This will be hard work, but is fully possible. You may also find other answers for some of your needs, and simply use a container runtime if you have other infrastructure to handle the rest of this cloud native stack. This is an amazing offering, supported and updated regularly for free by the same team that brought you the addictively simple and powerful docker workflow over four years ago. I use it on a daily basis because it just works. And based on Docker’s statistical information, this free product is being used by millions of developers worldwide today.

Docker in the Enterprise

Beyond this desktop stack, Docker has continued to develop an enterprise product that, again, combines a significant number of open source projects, including their own, as a commercial offering. As is normal for any revenue-generating business that has grown up from an initial open source project, Docker, Inc. has had to step through the usual land mines of open source vs. product delineation, and in my opinion has done a good job of maintaining a strong commitment to open source. The Docker engine core is still publicly managed as an open source project on GitHub, and the output of that effort (the community edition) has remained free and without any entanglements to require being a customer of Docker, the company. Those of us who have been around open source for the last decade plus have seen much, much worse. In contrast, Docker has in my opinion dealt with the complex situation of product, revenue pressures, and open source in a reasoned way, introducing the Moby project and many “Kits” along the way to spin out innovation beyond their walls. Some of these–like LinuxKit and BuildKit–have caught the imagination of many well outside the usual Docker circles, and have proven useful to both Docker’s products as well as the open source community at large. In very recent news, Docker has some bragging rights around this enterprise platform as Forrester Research has named them the leader in the enterprise container platform space. While you can bet we at IBM are working to improve and compete with our platform offering here (and we can expect the same from Red Hat, Pivotal, and others), today is a day to recognize Docker has had early success building a competitive platform atop the open source underpinnings.

It’s Docker AND Containerd

So why take the time to write out these thoughts? For one, I want to clarify for my readers the “why” of my current operational focus on containerd. I want containerd to be the best possible core, secure, and stable container runtime for Docker’s stack, the Kubernetes community, and the many additional projects that are finding value in our containerd API and codebase. Secondly, for those trying to understand the choices they have in the container runtime space, I think it is important for people to think through sometimes emotionally-driven responses that ignore the reality of what it means to switch to an alternative.

Finally, I’m about to kick off a series here on my personal blog giving practical migration steps when switching from Docker to containerd as the CRI runtime underneath Kubernetes. As IBM Cloud, GKE, and potentially more public managed Kubernetes offerings switch the CRI-enabled runtime from Docker to containerd, there are a set of learnings I would like to share to help vendors and users through this transition. As I do that, it may appear to some that this is a “Docker versus containerd” discussion rather than a “Docker and containerd” discussion. Hopefully from this post you can see my perspective on these issues, and the fact that I find it totally reasonable to love containerd…and Docker!

Postscript

If you want to talk more about this topic in person, and are attending Open Source Summit next week in Edinburgh, Scotland (October 2018), check out the Docker Edinburgh meetup on Monday evening where I will be talking about this specifically, or come to my talk (also on Monday) at the conference and catch me afterward.

Thanks to draft reviewers who made this post 100x better: Bret Fisher, Laura Frank Tacho, and Jenny Burcio

3 Things Speakers Want to Hear From You
https://integratedcode.us/2018/03/29/3-things-speakers-want-to-hear-from-you/ – Thu, 29 Mar 2018

So you’re at yet another tech conference and dozens of speakers are giving talks throughout the week. Many of them seem unfazed by the fact they are speaking in front of anywhere from fifty to a few hundred, or even a few thousand, people! First of all, let’s deal with this misunderstanding about the mental state of conference speakers, even very seasoned ones, especially right before the talk begins. I asked for some feedback on Twitter this past week; check out these responses from @QuinnyPig, @shapr, and @Azumanga:

Anyway, I digress, but as your favorite conference speaker now breathes a sigh of relief as they finish their allotted few minutes of stage time, what do you think they want to hear as feedback? Many of them have spent many hours in painstaking preparation, and if they added live coding or demos to the talk, the anxiety level is even higher!

So, with some input from fellow speakers I’ve come up with three main responses that your average conference or meetup speaker wants to receive from you, the audience.

1. Your Talk Helped Me Learn About …

The majority of conference speakers are actually doing these talks as a labor of love to share knowledge and expertise that they have gained in a specific technology or concept. They would love to hear that it actually worked. What’s even cooler is that maybe you learned about something that wasn’t even the main intention of the talk. The speaker would probably love to hear that there were unintended positive side effects of the material they prepared. So, be specific. “Great talk!” is nice to hear, but hearing a full sentence about the fact we’ve demystified a specific concept or that you now understand technology X much better would be awesome to hear!

2. Ask useful questions during the Q&A

The Q&A time, contrary to popular thought (!!), is actually for the audience to ask questions, not for the audience to contribute their own ideas on the topic. It would be awesome if you asked insightful questions during this time that draw out the speaker more. Usually they are more relaxed during Q&A as the main portion of the talk is complete, and they can have some interaction with fellow technologists who may be on the same journey they have been on with the subject matter. Asking useful questions that are on-topic with the presentation shows the speaker that the audience was connecting with the material and wants to dig deeper or get clarity on a particular point.

3. Thank You!

It’s a simple thing, but given what we know about the significant time investment of each speaker, and in many cases the courage to step onto the stage, it is awesome to hear “thank you” from conference attendees. Thankfulness can go a long way in helping a discouraged speaker–one who is well aware of their own mistakes and the part of the talk that didn’t go as planned–recover and realize that it was useful for them to step up and give the talk even with all the imperfections. All of us tend to be extremely hard on ourselves (ref: imposter syndrome) and sometimes it takes a sea of thank you’s to overcome the overly critical voice in our own head.

In Closing…

A couple “don’ts” to close this post out:

  • Unless you are a close friend of the speaker (or maybe the well-known expert on the topic they spoke about) now is probably not the time to give a laundry list of constructive criticism and feedback on the content. Most speakers have people available to them to provide the necessary critical feedback on content, approach, style, and so on. It will most likely not be easy to take that kind of feedback from a total stranger, so hold it for now. If you must share some corrections to content, knowing that most speakers want to present the correct information, I would suggest sending an email, Twitter DM, or a message via whatever contact method the speaker provided the audience.
  • While a speaker may seem totally comfortable on stage speaking to hundreds or thousands, remember that people are human, and weariness of crowds, social interactions, and so on may mean that the speaker would strongly prefer not to hang around and talk with you for the next hour! Give a speaker space to do whatever is best for them to “recover” from conference and/or speaker overload. Ask them politely for some time if you really want to discuss the topic further, but understand they may not be up for it today. If so, find another contact method, and take it graciously and back off if you are politely told no. All the usual code of conduct guidelines are in effect with speakers just as much as they are with other conference participants!

And thanks to Kelsey Hightower, the cloud native quintessential speaker, for some perfectly timed tweets this past week without even knowing I was writing up this post! Go out and thank a speaker today. It could make their day.

P.S. And if you think speakers don’t have a load of “#fail” stories, check out this post and related tweet thread on the topic of speaker fails. Thanks to Vuong Pham for pointing it out to me on Twitter!

“How Linux Became My Job”: Extended Cut, Geeks Edition
https://integratedcode.us/2018/02/21/how-linux-became-my-job-extended-cut-geeks-version/ – Wed, 21 Feb 2018

Recently the opensource.com editors made an open call for people to submit their own “open source story.”

I thought it would be a fun trip down memory lane, so I put a draft together and submitted it. Thanks to the great editors at opensource.com, that story is now published and live: “How Linux Became My Job.” To keep my story from being a rambling mess of overly technical details, I had to leave out a lot of the intricacies, and thanks to the awesome editing from the opensource.com team, I was very happy with the clean and readable finished article.

But sometimes these extra details are the fun tidbits for us geeks to devour, and sometimes we are OK drowning in the minor (and likely unimportant!) details. Given the main story is now published, I thought it might be fun to add this “geeks extended cut” here on my own blog with a few of these details.

OS/2, the PS/2 and Boca Raton, FL

I started my IBM career in Boca Raton, Florida, which happens to be the birthplace of the IBM PC. While manufacturing had long moved away from Boca, the old manufacturing buildings had been repurposed into office space, and it was in that set of buildings that I worked with hundreds of other IBMers (and a few Microsoft contractors, if you can believe it) on OS/2 for my first few years at IBM. We were testing our operating system on a myriad of PS/2 models filling large raised floor test labs. 20MB hard drives and the new 486 processor were the exciting new developments in those days!

This was before laptops, so if you wanted a “home terminal” for checking email or doing work at home, IBM had the P70 with an amber screen, diskette drive, and 386 processor that I had in my apartment for a while, dialing up via modem to IBM! Original list price for the P70? Just under $5,000!

IBM P70 “portable” computer

I ended up not working directly on OS/2 for very long, as a new team was formed to work with Apple on OpenDoc, an OLE competitor that married some Apple technology with IBM’s object oriented library, SOM (System Object Model), and its OO RPC (CORBA-based) associate, DSOM. It was my time on OpenDoc (though short-lived given Steve Jobs killed OpenDoc on his return to Apple) where I really started learning what it meant to be a developer. I started to really enjoy debugging, and I was asked to spend a considerable amount of time debugging and fixing problems in the storage layer (Bento) of OpenDoc, which was extremely buggy at the time. Let’s just say I found out all too well how painful manual reference counting that controls memory allocation/deallocation can be!

I also started to experiment with small software projects “just for fun”, and decided it would be interesting to write a “chat” tool on OS/2 that would use DSOM to share the messages (as objects) with any participating client that would join the DSOM server. It was horribly buggy because DSOM was unstable and buggy, but I had a father-in-law, brother-in-law, and my wife all at IBM sites at the time and we were able to use it for simple communication throughout the day. In hindsight that was my opportunity to beat AIM to the punch and create my own startup! Given this was Florida and I had an interest in weather, I also wrote a hurricane tracking map application using publicly available NOAA weather data.

IBM’s Java Technology Centre (JTC)

IBM joined up as a full Sun Java source code licensee in the JDK 1.0 timeframe. A “JTC” organization was created, and a small group of us, now located in Austin, Texas, were asked to be an extension of the JTC (located in Hursley, England) to handle the porting of the JVM’s GUI implementation to OS/2 Presentation Manager. This work would enable the AWT (Abstract Window Toolkit) classes in Java for OS/2, using the official Java source’s Windows GUI implementation as a starting point.

At this point I had become a Linux user and hoarded enough hardware to have a Linux box in my office at IBM. It was an “exciting” time to use Linux inside IBM, as the network technology was still token ring based, not Ethernet. The token ring driver for Linux was under constant development to keep up with specific PCI card revisions, models, and ring implementations. Any distribution update usually meant making sure the ibmtr or olympic drivers were available for the new kernel revision, and you had to build them manually and, if necessary, fix any issues so you could be on the network!

During these JVM porting days, my manager and I convinced IBM that I needed real Sun hardware to do “testing” on the official Java platform. So, for several years I had a top of the line Sun UltraSparc system with a Sun Creator 3D graphics card and beautiful display. This system was running Solaris of course. When I started the porting effort for Java for Linux, IBM also provided me with an IBM Power workstation (I think a 43P model) running AIX, as we had already ported the JVM to AIX as a “reference” for my porting work. My office was effectively the United Nations of UNIX and Linux systems!

One of my first forays into dealing with open source, licensing, and distribution happened during the porting of the JVM to Linux. On AIX and Solaris, the Motif library was used in the AWT graphical implementation, and for these licensed proprietary operating systems, the distributor of the OS had a license from the Open Group to ship Motif libraries. Linux distributions had openmotif but there were questions about whether it was properly licensed and distributable. We had to work with The Open Group and IBM legal to understand if we were approved as a licensee to distribute a shared Linux library of Motif inside our Linux JDK installation package. Thankfully people smarter than me worked out the details with the Open Group, and many years later, Motif was finally released under the LGPL.

Embedded Linux & Linux Distros

My time in IBM’s Linux Technology Center was focused on distribution partnerships (SUSE and Red Hat in those days; expanding to include Canonical/Ubuntu years later) including enabling Linux for all our hardware platforms. This included some interesting devices that were not standard x86 architectures. One of the key devices needing proper Linux support was the service processor for IBM POWER and z Systems high-end servers. There were also other embedded devices which had been using proprietary OS components for many years prior to this Linux initiative.

The fun part of this job was that in the early 2000s many upstream Linux projects hadn’t enabled build support for embedded/non-x86 processors. At the time we were supporting MIPS, ppc4xx, and ARM as well as POWER and z (s390x) systems. Even if a project’s Makefile build system or the autoconf or autotools support was correct, sometimes the provided RPM spec file didn’t properly handle other architectures by default. So, my job was to review (and fix as necessary) hundreds of Linux distribution packages: enabling cross-compiler tool support, helping the autotools setup “do the right thing” regarding “guesses” for a cross-build environment, and anything else that needed proper enabling for non-x86 builds. Today much of this multi-platform enablement is cleaner in upstream open source projects, but at that point it was brand new and support was very spotty. Along with this manual work of package-by-package fixups, I also became an expert in building gcc for various build/host/target cross-tools combinations. While it wasn’t rocket science, it was fun to have that knowledge of something that had seemed like a black art at the time! These were enjoyable years working closely on these tasks with a great team of people, including Josh Boyer, who was the ppc4xx maintainer in the Linux kernel at the time, and later went on to be an active leader in the Fedora project, working at Red Hat.
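To give a flavor of that “black art,” here is a minimal sketch of the classic build/host/target configure dance for a cross gcc; the triplets, the install prefix, and the surrounding steps (binutils, kernel headers, a bootstrapped C library) are illustrative placeholders rather than a complete recipe:

# build = machine doing the compile, host = where the new gcc will run,
# target = the architecture the new compiler emits code for
$ ../gcc/configure --build=i686-pc-linux-gnu --host=i686-pc-linux-gnu \
      --target=powerpc-linux-gnu --prefix=/opt/cross/ppc
$ make && make install
# a package cross-build can then be pointed at the new toolchain
$ ./configure --host=powerpc-linux-gnu CC=powerpc-linux-gnu-gcc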

Containers, Containers, Containers

Fast forward to my current work, and clearly I have been focused on Docker and the Moby project, the Open Container Initiative (OCI), and the CNCF containerd project. But when I stepped into that work back in August 2014 I had no idea how much it would become a part of my future work and career path.

I had some fun interactions with all kinds of developers in those early days of #docker-dev on IRC. I was a total newbie and was focused on trying to learn this new area of container technology. Looking back, it was an amazing cast of characters from Jess Frazelle, to Vincent Batts, Alexandr Morozov, Michael Crosby, and many others. But one interaction stands out above the others. At the time I had no idea who this “kelseyhightower” was who chatted in the IRC channel now and then. One day he started asking about IPv6 support in Docker and how to get the ball rolling on figuring out a design and what work would be needed to make it happen. I had seen a patch come in from a contributor who had then disappeared, and I was slowly trying to get that code ready for a PR. Here’s a log of the actual discussion that happened that morning in IRC:

IRC discussion with Kelsey Hightower

So, I had a chat about IPv6 with Kelsey Hightower long before I ever knew who he was and before I had ever met him (and maybe before many other people knew who he was)! Looking back, those were some amazing days of collaboration and camaraderie that are nearly impossible to recreate, but I know many of us have great memories from those times and enjoy reminiscing when we see each other around the conference circuit. They were hectic days: so many releases, hundreds of PRs a week, new issues appearing seemingly every few minutes, 24/7! It was a huge time of learning for me, but many of my prior experiences with Linux, with software development, and with debugging all came in handy to make it a very successful time for me as well.

Summary

So that’s my “Extended Cut, Geeks Version” of How Linux Became My Job! Hopefully a few of you may connect with one or more of these random details of my time in software development over the past two decades. Feel free to share your most interesting tidbit about your time in tech in the comments below!

2017 Predictions Redux: Was Alexis Richardson Right!? https://integratedcode.us/2017/12/07/2017-predictions-redux-was-alexis-richardson-right/ https://integratedcode.us/2017/12/07/2017-predictions-redux-was-alexis-richardson-right/#respond Thu, 07 Dec 2017 17:55:48 +0000 https://integratedcode.us/?p=495 One year ago this week Nostradamus, I mean @monadic, better known off-Twitter as Alexis Richardson, CEO of Weaveworks and the CNCF TOC chair, wrote an interesting 2017 cloud native predictions post for vmblog.com. I found it interesting enough that I shared it around with several colleagues, one of whom challenged me to set a calendar reminder to revisit it one year out. That calendar entry hit as I traveled to KubeCon/CloudNativeCon in Austin, and I thought, what better time to review his predictions publicly. In the style of the ancients, I assume we will either burn him at the stake for heresy or hail him as a god depending on the outcome. Per IBM legal: I’ll leave it up to the CNCF Code of Conduct experts to determine viability of any actions taken for/against Alexis. Let’s jump right in.

Container Wars are over.  Non-standard tools are now toast.

Peace breaks out between rival camps as customers decide they want to run Kubernetes on Docker in a fully supported enterprise-grade stack that actually works. Everyone else moves to support this or packs their software back into their luggage. The ecosystem of dev tools, add-ons and extensions finally takes coherent shape. The OpenStack and PaaS communities explicitly endorse the emerging Cloud Native stack.  Enterprise app stores begin to get traction, starting with Docker’s store.

This prediction is hard to score. In one sense, Docker’s announcement of Kubernetes adoption/support in its desktop and enterprise edition products definitely underscores a huge break in the tension of the orchestration “wars,” if there was one. The OCI 1.0 milestones for both the runtime and image specs, long-awaited to deal with differing opinions and implementations at the lower runtime level, were also a huge win for 2017. However, as painful as it is personally to accept, we still have too many questions about whose runtime is the “winner” from the containerd, cri-o, rkt set. And new entrants like Kata, although not necessarily a new runtime as much as an OCI implementer like runc, potentially cloud the picture for customers. The good news is that with the CRI in Kubernetes, actual end customers shouldn’t have to care who “wins” the container runtime wars. Stable and boring are winning, and that’s good for customers. To the point on PaaS and OpenStack, the Kata announcement as well as Cloud Foundry adoption of OCI and the PKS announcement from Pivotal are dead in line with predictions from Alexis. Looking good so far. Next:

Rise of the Container Cloud

Using the emergent Cloud Native stack, all cloud providers sell container hours, as well as VM hours.  With this in place, each cloud provider takes steps to fully integrate containers with their network, security, management and data services. This makes containers into a “first class citizen” for cloud applications, and accelerates adoption by cloud customers. However, customers want more than this – Cloud Native applications that can run at scale on any infrastructure.

While IBM Cloud and Joyent were already offering containers as a first class object before 2017, we can admit that neither took the industry by storm. As all major clouds jumped to offer managed/public Kubernetes offerings, finalized by AWS announcing EKS at re:Invent last week, we did see interesting projects like Azure Container Instances and AWS Fargate come to life in 2017. With a significant amount of serverless activity blurring the lines, using infrastructure painlessly via containers and function-centric computing is definitely a 2017 thing.

Cloud Native washing breaks out

Customer demand for Cloud Native, and freedom from lock-in, leads to more solutions for enterprises that cannot move all their applications to the cloud immediately. Private enterprise application and container platform vendors fight for leadership – Docker, Pivotal and Red Hat shouting the loudest. Cloud Native follows the same hype cycle as big data and cloud before it. Meanwhile, developers ignore all this, and pick their own tools from the emerging Cloud Native landscape.

Well, if nothing else we can agree that the cloud native landscape is continuing to grow like crazy and developers are picking their own tools from a dizzying array of possibilities. Whether you are a fan of the official landscape view via the CNCF or not, as Dan Kohn continues to show conference to conference with ever-smaller icons dotting the image, it’s a difficult job to keep the landscape up to date! Enterprise vendors are definitely in the game to capture enterprise customers wherever they are on the timeline of “cloud adoption.” From IBM’s IBM Cloud Private (ICP) offering for on-prem Kubernetes, to OpenShift from Red Hat, to the MTA program coming from Docker alongside their enterprise partners, the fight is definitely on.

CNCF Cloud Native Landscape, v1.0

Compliance takes center stage

Political and economic nationalism lead to a far greater emphasis on data protection and repatriation. While enterprises seek to benefit from the flexibility of Cloud Native application architectures, their customers will want a clear privacy, security and data ownership story. This cannot be trusted to zombie operations teams using a pre-cloud compliance playbook. Vendors who can deliver “security that moves with your app” will benefit.

Security is still a huge area of activity across cloud native. The vendor space is crowded and getting more so, not to mention public clouds looking to move from “we have containers” to “we have containers and we can provide infrastructure with FIPS 140 certification, or EU data privacy (GDPR) segmentation, etc. etc.” Per the “security that moves with your app” comment, Grafeas and the Moby Project’s libentitlements work are both quite interesting projects announced late in 2017.

Big acquisitions as Cloud and Enterprise come together

When combined, these predictions imply acquisitions in the next 24 months. The big cloud providers hold all the cash, and the enterprise incumbents have the customer base for the next stage of cloud adoption. Google and Microsoft will probably make the banner plays, with Red Hat, Canonical, Docker and Mesosphere as the biggest prizes.  IBM may also buy Pivotal from Dell.

Alexis wins this round with a “24 months” hedge! We’re only halfway through, and although in my opinion major acquisition activity has been very light apart from Deis to Microsoft, he gets a 12-month reprieve on a score here. And, no, we didn’t buy Pivotal, yet.

Summary

Overall, I’m quite impressed with the foresight of a year ago. Impressed enough that I’m wondering if Alexis should consider a career move towards sports betting or a Vegas relocation for Weave. More seriously, 2017 was a very interesting year for cloud native, and the activity and hype around all things cloud, containers, and serverless seems to be pushing higher, unabated by the turning of another year.

Moby Summit: Serverless, OpenWhisk, Multi-Arch!? https://integratedcode.us/2017/11/21/moby-summit-serverless-openwhisk-multi-arch/ https://integratedcode.us/2017/11/21/moby-summit-serverless-openwhisk-multi-arch/#respond Tue, 21 Nov 2017 16:25:52 +0000 https://integratedcode.us/?p=471 The day after the usual fun and excitement of DockerCon has traditionally been open source contributor and maintainer focused. With the announcement of the Moby Project back in April at DockerCon Austin, this post-DockerCon event is now more formally named the “Moby Summit” and is getting bigger and better each time. In Copenhagen a few weeks ago, we had the fourth iteration of the Moby Summit, and I was able to represent the containerd project as well as give a follow-up to the Serverless Panel hosted during DockerCon, with a 15-minute slot on OpenWhisk and IBM’s approach to FaaS and serverless computing.

Read about and watch the three serverless talks given at Moby Summit in Copenhagen in this Moby Medium blog post.

Admittedly I had already given away my lack of deep knowledge on OpenWhisk during the serverless panel, so I made up for that by providing a clearer description of the OpenWhisk architecture here. Also, I provided the delineation between open source and IBM’s cloud: IBM Cloud Functions is IBM’s public instance of the OpenWhisk Apache Incubator open source project, hosted and connected up to our cloud services, from Watson to the Weather Company, as well as a broad mix of data and storage services.

Given I only had fifteen minutes, I made this talk a mash-up of 1) a quick OpenWhisk and IBM Cloud Functions overview, 2) a brief revisit of my bucketbench talk from Moby Summit L.A., and 3) a quick demo of my personal use of serverless functions to provide an easy way to query multi-platform image support from any Docker v2 API-supporting image registry.

You can watch the talk here, and I’ll add a bit more detail on each of these components below.

OpenWhisk/IBM Cloud Functions

I’ve already said most of what’s necessary in the opening paragraph. IBM Cloud Functions is the IBM public cloud offering analogous to Azure Functions, AWS Lambda, and other public cloud serverless offerings. IBM Cloud Functions, similar to other offerings, has a specific serverless pricing model and built-in capabilities for logging, monitoring, function management, triggering, and some new capabilities around function composability. You can read more about these new features in this IBM blog post. All of this is built on top of the Apache OpenWhisk open source project, originally built by IBM and contributed to the Apache Software Foundation with partners like Adobe, among others. For more getting-started-level content, please see my colleague Daniel Krook‘s “functions17” repo on GitHub for lots of great content.
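If you would rather poke at it than read about it, the open source wsk CLI is the quickest way to get a feel for the model; this is a minimal sketch assuming a simple hello.js Node.js function that returns a greeting (the action name and parameter are placeholders, and IBM Cloud Functions fronts the same verbs through its own CLI plugin):

# create an action from a Node.js function, then invoke it synchronously
$ wsk action create hello hello.js
$ wsk action invoke hello --result --param name World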

Bucketbench

I’ve spoken about my bucketbench project a few times this year, and wrote a blog post on the project this past summer. If you like moving pictures better, you can see a recording of my presentation on bucketbench from OSCON’s Open Container Day back in May 2017 (slides). The intent of bucketbench was to have a simple framework to drive container lifecycle operations (scaled as desired via container count and concurrency) against any desired container runtimes, with the purpose of benchmarking for comparing/contrasting runtime performance. The driver interface was pluggable and started with support for the Docker engine, containerd 0.2.x, and OCI’s runc. It has since grown to include containerd 1.0 via its gRPC interfaces/client library, and, very recently through the contributions of Kunal Kushwaha, the ability to drive any Kubernetes CRI-based runtime via the CRI gRPC API endpoint.

The “why did you create that?” is clear from the talks I’ve done and my prior post, but I’ll elaborate a bit here. The IBM team which created OpenWhisk, an open source serverless framework that is now an Apache Foundation incubator project, was trying to investigate the best possible runtime scenario for executing functions packaged as Docker containers. Given they had heard about the layers of the Docker engine, including containerd and OCI’s runc, they wanted to understand the performance trade-offs of different scenarios–e.g. higher levels of contention, significant use of pause/unpause lifecycle operations–for each runtime. They had hardcoded some level of performance benchmarking in a script, but it seemed reasonable that others would want to perform similar tradeoff exercises, and so bucketbench was born!

As mentioned above, one of the most recent improvements that had a lot of interest was the ability to drive CRI implementations and not just “raw” container runtimes. Kunal has recently been using bucketbench to do some initial runs against CRI implementations.

I’m still very interested in feedback on bucketbench from others. What’s useful about it? Tell me what isn’t useful or could be improved so it becomes a more useful utility. I’m using it these days to test each driver of the containerd 1.0 release to understand if we have made any impact on performance or stability. I will be posting the results from the beta series soon in the bucketbench GitHub repository.

Multiarch/Serverless Mashup

In the final minutes of the talk I showed off a recent use of IBM Cloud Functions/OpenWhisk to provide a tool to answer a common question: now that the official DockerHub images are all ‘manifest lists’, how do I know which architecture/platform combinations a given image supports? My manifest-tool utility can do this, but for everyone to answer that question for themselves means installing that tool on their local system(s). Instead, I wanted an easy way for anyone to be able to make a simple HTTP request and get a list of supported architectures/platforms for a specific image name and tag. Given that, as I said, manifest-tool can do this work, and the fact that IBM Cloud Functions allows functions to be packaged as Docker containers, I could simply package manifest-tool in a container image and wire that to a function name in IBM Cloud Functions! But, I didn’t stop there. Given that the output of manifest-tool is a bit overwhelming, I wrote a second function that processes the JSON output from manifest-tool, pulls out only the architecture/platform details, and then caches that data in a Cloudant NoSQL database so that repeated queries for the same image don’t re-check the image registry for the same details.
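For the curious, registering a container image as an OpenWhisk “blackbox” action is a one-liner with the wsk CLI; the action and image names below are hypothetical stand-ins for what backs my demo, and the image has to speak the OpenWhisk action protocol (accept parameters, return JSON):

# wire a Docker image up as an action, then query it like any other function
$ wsk action create arch-query --docker myuser/manifest-tool-action
$ wsk action invoke arch-query --result --param image golang:latest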

You can check out the simple demo in the video, but all the code, including how I build the client for the query functions as a multi-platform image, is on display in my mquery repo on GitHub. Using the existing mplatform/mquery image on any Docker-supported architecture, you can simply query any image in any registry, anywhere you have Docker installed, like:

$ docker run --rm mplatform/mquery golang:latest
Image: golang:latest
* Manifest List: Yes
* Supported platforms:
- linux/amd64
- linux/arm/v7
- linux/arm64/v8
- linux/386
- linux/ppc64le
- linux/s390x
- windows/amd64:10.0.14393.1884

The power and simplicity of serverless for this kind of use case is clearly evident. The kind of queries needed to talk to a registry and answer this “multi-platform question” does not require a long-running server. I don’t have to manage uptime, OS patching, or worry about whether my functions will run when someone performs a new query. I don’t have to worry about dealing with scaling due to load: if 10, 100, or 10,000 people all decide to use my “mquery” tool at the same time, that’s for the serverless platform to handle. The chaining between the filtering and caching function and the execution of the manifest-tool function itself allows me to manage each as a separate, singularly focused entity–if I want to change the UI output, I only have to edit the filtering Node.js function code and leave the other function untouched. I find myself definitely agreeing with Kelsey Hightower on this one.
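In OpenWhisk terms, the chaining described above is simply an action sequence; a minimal sketch with hypothetical action names matching the earlier example:

# run the registry query first, then feed its JSON output to the
# filter-and-cache function
$ wsk action create mquery-chain --sequence arch-query,filter-and-cache
$ wsk action invoke mquery-chain --result --param image golang:latest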

So, that’s my brief summary of my whirlwind Moby summit talk on “serverless.” If you are interested in the multi-arch talk I gave with Michael Friis at DockerCon or the serverless panel you can find links to them as well as other related content below.

Related Content:

DockerHub Official Images Go Multi-platform! https://integratedcode.us/2017/09/13/dockerhub-official-images-go-multi-platform/ https://integratedcode.us/2017/09/13/dockerhub-official-images-go-multi-platform/#comments Wed, 13 Sep 2017 06:33:55 +0000 https://integratedcode.us/?p=465 So, every once in a while you get the immense pleasure of seeing an idea through from start to finish. Multi-platform container images may not be exciting for everyone, but it’s a topic I’ve been thinking about and working on since a team at IBM first approached me about helping figure this out in November 2014.

I was a relatively new contributor to the Docker engine (a few months of PRs under my belt), and the Golang Docker “v2” distribution project wasn’t even out yet! We were poking around the Python code of the “v1” registry trying to figure out the best way to support the image format, knowing we had upcoming ports of the Docker engine for our POWER and z Systems (a.k.a. the IBM LinuxONE) platforms. We wanted all users of the Docker engine to have the simplicity of that docker run redis experience no matter the CPU architecture.

Fast forward a year and lots of the image format details had been worked out (the v2.2 image spec was close to finalized), and I put up this forward looking slide at the end of my lightning talk at DockerCon EU in Barcelona:

DockerCon EU, Barcelona, November 2015: Lightning Talk on Multi-arch Images: “What Should Happen?” slide

It seemed pretty simple from here: the engine will do the work, find the matching image entry and run that container image after assembling/downloading its layers! We’ll skip all the blood, sweat, and tears from that point until now, but as of today we are no longer “Looking into the future” as that slide stated. Today, September 12th, 2017, you can simply type docker run redis on seven different Linux OS architectures (64- and 32-bit Intel, two variants of 32-bit arm, 64-bit arm, ppc64le, and s390x) and you will have a running redis server!

Along the way, I wrote a tool to do this assembly of what became known as “manifest list” objects in the v2.2 image spec. You can find out more about manifest-tool from its GitHub project page, or from my blog post on this topic from last April. Lots of people have contributed to making this tool more useful along the way, including Lucas Käldström (better known as @kubernetesonarm on Twitter), the LinuxKit team (who already went “multi-arch” several weeks ago), and of course Tianon and his merry band of official image creators at InfoSiftr.

The good news is that in addition to the manifest-tool project, work is well underway to have a Docker client command to interact with manifest objects (including manifest list creation); you can find a lot more of that discussion and implementation in docker/cli PR #138. Thanks to IBM colleague Christy Perez, who took up the monumental task of creating more than one PR/implementation of a brand new client subcommand and handling all the review, updates, rebases, and commentary it has generated since! You can read her blog post on creating multi-arch images, and definitely check out Christy and fellow IBMer Chris Jones’s talk from DockerCon Austin earlier this year on the topic.
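As a preview of where that client work is headed, the workflow looks roughly like the following once the subcommand is available; the image names are placeholders and the exact flags may still shift before it ships:

# combine per-architecture images into a single manifest list and push it
$ docker manifest create myuser/app:1.0 \
      myuser/app:1.0-amd64 myuser/app:1.0-arm64 myuser/app:1.0-ppc64le
$ docker manifest annotate myuser/app:1.0 myuser/app:1.0-arm64 --os linux --arch arm64
$ docker manifest push myuser/app:1.0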

Of course, more work is still underway related to the UI elements of DockerHub visually showing information about manifest lists, as well as broadening support of official images across architectures (and platforms, too!). Speaking of platforms, Windows is part of this multi-platform family of support via manifest lists. Fellow Docker Captain Stefan Scherer has a great talk on this topic via Slideshare. Remember that for an image on DockerHub to have a full slate of entries in the manifest list, it must be buildable/supportable across those CPU architectures and platforms. So, IBM, ARM Ltd, Raspberry Pi enthusiasts, Microsoft, and many others are still at work building out further support across the popular and official open source software images.

So, how can you tell if your favorite image has support for your platform or architecture of choice? Today the easiest way is to use the manifest-tool inspect command against a repository/image reference (or fully specified registry URL and image in the case of private registries) and retrieve a listing of the supported platform entries within that image, if it is a manifest list object. This capability will also be available in the Docker client when the earlier mentioned PR is finalized. See the releases page of the manifest-tool to easily download a pre-built binary for your platform of choice.
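For example (output omitted here; it lists one platform entry per architecture in the manifest list, and the private registry reference in the second command is a placeholder):

# query DockerHub for the per-platform entries behind an official image
$ manifest-tool inspect redis:latest
# for a private registry, spell out the fully specified reference
$ manifest-tool inspect myregistry.example.com/myteam/app:1.0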

Finally, a few more words of thanks. I started bugging the original Docker distribution team about this topic several years ago. Initially Olivier Gambier was kind enough to get a regular call started with a community of interested parties, which then led to me harassing Stephen Day and Derek McGowan on a regular basis to get the v2.2 image spec finalized, agreed to, implemented, and ready for use. Thanks additionally to Aaron Lehman, who ended up doing the initial work/PRs for handling manifest lists in the registry code and the Docker engine.

The good news (well, for those not tired of the topic yet) is that I won’t stop talking about multi-platform image support quite yet, as Michael Friis and I are speaking at DockerCon EU next month in Copenhagen on this very topic: “Docker Multi-arch All The Things.” We’ll be showing off the open source and Docker Enterprise enablement of multi-platform clusters with hybrid nodes using Windows, Linux on amd64 and Linux on z & POWER systems!

I’m really excited to watch and see how today’s next big step forward in enabling multi-arch and multi-platform support impacts the viability of container native solutions across more architectures and platforms as we head into the future.
