
July 17, 2018
Exploring the Progression of Modern Software (Part 3)

If you’ve been following along in this series, we’ve been discussing the progression of modern software. We’ve tracked how software has been created and developed over the course of the last 50 years, and we’ve dug in deep on some key philosophies behind this journey. If you haven’t read the previous posts, I’d highly recommend reading Exploring the Progression of Modern Software Part 1 and Exploring the Progression of Modern Software Part 2 before continuing with this post. Trust me, it will make the following sections far more relevant and interesting.

Now, I know you have been on the edge of your seat (or at least I imagine many of you have been) waiting to hear how I resolve the cliffhanger at the end of the last post. But before I answer those mysteries and resolve those questions in what I promised would be an extremely elegant manner, it is important to recap what we’ve learned up to this point.

The Vehicle

First, in Part 1 we discussed the software vehicle. By this I mean the several means by which software can be delivered: on a single machine, in a server (or cloud) environment, and lastly in a decentralized manner. These methods are the vehicle by which software is carried to the user.

The Engine

In Part 2 we explored the next important concept in the evolution of software, which we called the software engine. This engine is how software is powered, and we identified three engine types: closed source, open source, and blockchain. I know the third engine is an interesting one to some because it is not often perceived as different from closed or open source, but there are some unique properties of blockchain which I believe warrant the label of a distinct software engine.

Part 3: The Driver

This means so far we have covered the software vehicle and the software engine. Both of these aspects focus on what the software company has created or envisioned for their software. But there is an extremely important part of the equation we have not yet addressed: the driver. There must be someone behind the wheel of the software. This is the software user, and although the user has been (relatively) overlooked in the software development process, they are the focus of this third and final post in the series. Let me explain why the user is so critically important.

The purpose of software

The user implements the software and dictates its purpose. In essence the software should serve the needs of the driver, carrying them to their destination in the fastest possible way. Software was created to serve the user. This was the original intent, and yet with each iteration this goal became a bit more obscure. In fact, we quickly lose sight of the original goal of software and instead witness the pursuit of a variety of sub-goals. As our technology has improved from a hardware perspective (which is an interesting side study) we have shifted our software focus away from the primary objective. But let’s continue our study of this user aspect of the progression of modern software.

The No Data Era

This is the era where it all began. Just as with the other posts in this series, it is important to keep in mind the overlap between this era and the others (in relation to all three components: the vehicle, the engine, and the user). It is also important to remember the purpose of software as we go through this journey and as we consider the use (or misuse) of data throughout each era. If we hearken back to the earliest days of software we are also hearkening back to the earliest days of hardware, and thus the hardware of the day is of equivalent importance. These early computers were physically incapable of storing information. They were simple functional machines. Then we witnessed the migration to tape-based programming and the concept of punch cards. During this period the concept of in-memory storage, the stored-program computer, was non-existent.

If we stop and think about it for a second, we can easily understand that during this era there was no data storage. This was the No Data Era. As the hardware improved over time we slowly saw the incorporation and creation of stored-program software. These computers held on-board memory capable of recalling specific protocols or routines and executing them. (Let’s not forget the size of these beasts either! Everything from a large refrigerator to an entire room was the norm.)

During the No Data Era there was no thought (or rather no mainstream thought) surrounding the storage of personal data. The focus was purely on stored procedures and the ability to store functional information. The No Data Era was in essence completely private, held no concept of singular vs. multiple storage locations, and as a result was infinitely secure, given there was nothing to secure!

But things move fast in tech, and the No Data Era didn’t stay devoid of data for long. Soon the hardware was ready for the software to progress. And progress it did, rapidly. Soon software could be written to perform vast and complex functions, eventually multi-routines, and finally complete software systems to carry out all manner of tasks. In many cases these tasks required data entry by the user, and the software would store this data locally in memory to be used in later computations. Throughout all of this proliferation of software a few of the core tenets remained. This era can be summed up easily with the following three words.

Three Word Takeaway: Single, Private, Secure

The Social Data Era

As I alluded to, this software progressed quickly. Very quickly. At times I can’t help but wonder if this progression occurred more quickly than we anticipated or prepared for. Moore’s Law described the rate at which the hardware improved, and the software was quick to accommodate this frenetic pace of development.

Along this software progression the data collected continued to grow. In fact, due to the improvements in storage capacity, the software expanded its data storage as well. Soon we reached an inevitable tipping point. I’d suggest this tipping point occurred when the cost of data storage became so inconsequential that many neglected to even consider the amount of space their data might consume. Software companies formed to create advanced software systems, tools light years more advanced than their predecessors from a previous era. This mass data consumption became the sole focus for larger companies as they began to correlate the value of data consolidation with direct monetary benefit.

In this era, software became once again the vehicle, but in this instance a vehicle for a different cause. Rather than a vehicle by which the user could move faster to accomplish a task, the software became a vehicle simply to collect and return as much data as possible to the company providing it. These companies soon learned that if they gave away the software they could raise adoption rates and increase data collection. They began to place the value of the software in the data collected rather than in a monetary fee for usage. But the user was unaware of this subtle shift in philosophy. Instead, the user merely saw the monetary cost of the software decrease until, in many cases, the software was free of all fiscal cost.

Do not let the subtle nuance of that previous sentence escape you. Fiscal cost is the key phrase. You see, the cost never left the software; it merely changed forms to a currency the user didn’t recognize. The user paid with their data instead of their dollars. (Or euros, etc. I use dollars for the alliterative effect alone.) This is the Social Data Era. We still live, for the most part, in this era.

In this era we see three key principles which, as you may now begin to imagine, correlate to those of previous eras. They are as follows:

Three Word Takeaway: Many, Semi-Private, Semi-Secure

I’m being highly generous, too, with the notion of semi-private. In essence this is the era of public data. The progression of modern software has led into an era where the user has relinquished their potentially most valuable and yet most misunderstood asset: their personal data.

The Personal Data Era

All of this leads us into what I believe is the most exciting era we’ve ever known. There’s certainly much required for us to fully realize this era and see the true value come from this time in modern software, but the rewards are many. A veritable plethora of opportunity awaits us, a type of utopia where the user and the software company are both capable of finding success.

I’ve hinted at what this era holds in both previous posts as well as in the previous section of this post. The answer sounds simple, but the progression to achieve it may be far more complex. The answer is personal data: data held privately by the user, with a true understanding of its value and worth, properly attributed. There are a few reasons why this next era in modern software progression may be seen as a difficult step to take. Allow me to elaborate on them briefly here:

The User Learning Curve

Humans are stubborn creatures of habit. We form opinions or hold beliefs about things and then obstinately refuse to re-evaluate until we are obligated to do so by some external force. This is the situation we find ourselves in when considering the value and use of personal data. Although I would suggest recent developments, such as GDPR, have served well in encouraging this transition in our internal thought processes, there is still a long way to go before the awareness and realization of the true wealth found in personal data is recognized by society as a whole.

This learning curve is steep and fraught with difficult points. Items such as the continued availability of “free software”, devoid of fiscal cost yet providing powerful features and addictive appeal, make this acknowledgement of the true cost difficult to reconcile. The user doesn’t want to sacrifice the supposedly free dopamine delivery system for the sake of data. I think of this challenge as similar to the detox treatment a drug addict must undergo when they desire to “get clean”. Users must begin to recognize they are paying with their digital lives each time they seek another “high”.

The Software Company Profitability

The second struggle which occurs when attempting to shift thinking into a new software era involves the software companies. These companies, as outlined in the previous section, have built their profitability models around the consumption of users’ data. They have employed every trick possible, some even dipping into “dark patterns” in their attempts to maximize data collection.

Interestingly enough, the software companies have deemed the data to be the valuable asset to be collected while the user has dismissed the same. Ironically, the data is far more valuable to the user than it is to the software company! I know there will be some who disagree with this point, but rather than belabor it in detail I’ll provide a very short explanation for my stance on the subject.

Software companies have long held data to be the pinnacle by which they measure their success and their profitability. They suggest the larger the dataset, the better their ability to maximize a positive sales outcome. One need only look at the diminishing value actually delivered by machine learning, the increasing troubles separating signal from noise, and the concept of dirty data to see this perfect picture is marred by the practical reality of the situation.

My coup de grâce in this argument lies in the recent news shared by Google regarding their AlphaGo project. After many successful and increasingly intelligent implementations they have arrived at a software variant exponentially more powerful than every predecessor. And the most damning revelation to the data hoarders lies in this single revelatory discovery: the software performed at these superior levels with zero pre-defined datasets. The conclusion is simple: the data was not the key for the software company; the algorithm was the solution.

I find this a fascinating and beautiful twist of fate: the very thing first explored during the creation of software, the algorithm and its functionality, has once again come back to the fore in this perfected modern software.

The Future of Personal Data

From here the conclusions are easy to draw. The writing on the wall is clear: when the data is owned by the user and used by the user, they are able to leverage their personal data for their own wealth and success. The data is truly the means by which an individual can increase their worth. And this can be done without negatively impacting the software companies, who now find their true value to be in the algorithm and software functionality. They are able to charge appropriate fiscal fees for access to their superior algorithms, processing capabilities, and functionality.

How poetic that the same data which is now hoarded, unused, and inefficiently manipulated by the software company and neglected in value by the user is actually the perfect currency when the roles and positions are reversed. What a beautiful juxtaposition.

This journey in modern software progression doesn’t end here. There are definite outcomes and results from these findings which lend themselves to identifying the means by which modern software progresses. The direct implications of this understanding lead straight to what the future of software looks like. Even more exciting, I believe this defines the future of our digital economy as well. And that idea is world-changing.


July 10, 2018
Exploring the Progression of Modern Software (Part 2)

Okay, I am so excited to get back to this topic. If you haven’t yet read the preceding post in this series, then I’d recommend reading Exploring the Progression of Modern Software (Part 1) before continuing. I’ll wait. Done? Welcome back. That post gave us an incredible first half of this topic, and I was absolutely gutted to have to leave it before I could finish my thoughts on the subject. But it’s Tech Tuesday again, so I am now free to publish Part 2.


As a brief recap in case you didn’t take the time to read Part 1 (I know who you are) or if you read it last week and need a refresher before continuing here is a short synopsis:

We examined the history of software progression from the earliest days, and we focused on three specific eras in the evolution of modern software. Those eras are:

  • The Personal Computer Era: In this era we examined briefly how in the earliest days software existed on a single machine, or single location software. The three words we associated with this era were: Single, Private, Secure.
  • The Server Era: At this point software evolved into existing on a server instead of a single computer and as a result we called this: a many-location software system. The three words to think about with server software were: Many, Semi-Private, Semi-Secure.
  • The Decentralized Era: Lastly, we discussed the current and future-focused view of decentralized software, labeled as a many-location software platform, both private and secure. And for this era we chose to focus on the following three words to highlight what the potential could be: Many, Private, Secure.

Okay, now hopefully we’ve had a nice convenient reminder of what Part 1 explored: in essence, the vehicle which held the software. In Part 2 we want to explore not the vehicle but the engine by which the software runs, and the evolution which took place “under the hood” so to speak.

I’ve broken this part into three convenient sections as well. Each of these eras relates slightly to the corresponding era from Part 1. Without any further delays, let’s jump right into the first and oldest era.

The Closed Source Software Era

Many of you may balk slightly at my definition of these eras, as there is clearly overlap across all of them in the world still today. But just as the vehicles of box, server, and decentralized overlap, so too do the engines; there are times when several different models exist at the same time. Regardless, what I would like to explore is a generalization of the common mindset and thinking in the business world over time. I am the first to recognize the early successes of open source and am certainly not trying to start any flame wars about what came first or was more dominant. (Hopefully that is enough of a disclaimer to subdue any angsty commenters!)

Closed source software was the go-to solution for businesses, both from a monetary standpoint and as a trusted implementation, due to the accompanying support and the trusted “business model” of the time. The perceived values of this era included the ability to have a single vendor responsible for the software, trust that the information was held privately by a corporation, and as a result a degree of security given the “closed source” nature of the code. (Notice this is regardless of the vehicle being either box software or server software, traditional SaaS.)

The three words to sum up this era are therefore best described as: Single, Private, Secure

Exception: However, there’s a slight caveat with the last point of “secure”, as the only reason for this perceived security was the closed source nature of the code and accompanying data. There was an unfounded misconception that closed source somehow equated to “secure”; however, as reports of hacked SaaS platforms grow increasingly common, this point falters more heavily. Hence I’ll leave this as a “perceived” benefit. Simply put, I believe this is a case of “what you don’t know won’t hurt you”: if the bugs aren’t shared, you don’t know the bugs exist.

The Open Source Software Era

Next what we see is a gradual shift toward acceptance of the open source software model as a viable business model and software solution, both for companies seeking to generate revenue selling services, support, and software around open source solutions, and for enterprises and others trusting open source software as stable enough for implementation in their business.

Again, I won’t belabor the point that this era co-existed with the previous one; I’ve said enough on that already. Simply put, there are plenty of news articles, blog posts, and media sources demonstrating the growing acceptance of this era of software as fundamental and beneficial for any SaaS company. The benefits of open source included the ability to incorporate the talents and skills of a larger community of engineers and developers focused on modifying and improving the code; the open nature of the code, which allowed bugs to be identified quickly and patched faster than in closed source (along with the perception this could be done better by a global audience than by a single company); and the freedom for code to be taken and used in a variety of alternative environments.

As a result the three words we could use to sum up this era include: Many, Semi-Private, Semi-Secure

The Blockchain Era

This is where things get interesting. As with Part 1, we are now about to move into an era which is only just beginning to come to fruition. We are on the cusp of something new and revolutionary. As a result, the points I’m about to share may be controversial. You may disagree, and that’s okay. I’m not suggesting anything I say is fact, but I believe, based on my experience (limited as it may be), there are signs which can be clearly seen, and we are at our best when we learn from our past and use our history to make intelligent decisions about the future. Given my experience in open source and the ridiculous, countless hours of study and research into the subject, I believe what we are seeing is indeed the beginning of the future. The next era in modern software has begun.

Note: Due to the prolific number of well-written articles on the topic of blockchain and the overwhelming volume of information available, I have not included links throughout this section of the post. I trust you to go spelunking on your own for more data. If you have questions, or would like my opinions on where to start, let me know and I’ll be happy to point out a few great sources I believe get it right.

 

I’d like to start as I did in Part 1 with the three words which best describe this modern, future software “engine”:

The three words associated with the blockchain era are as follows: Many, Private, Secure.

Let’s explore what each of these words relates to in this new and intriguing era.

Many

When we consider the topic of Many in the context of the blockchain era, we find quite a few similarities to the open source software era. This is in part due to the closely related nature of the code. In both eras the code is (predominantly) available as open source, able to be viewed and modified by anyone. This means the software can enjoy all the same benefits as traditional open source software. But there are additional benefits as well. Unlike open source software, blockchain software, or DApps (decentralized apps) as they are coming to be called, can be run by many. This means not only is the code worked on and contributed to by many, but the software itself is run by many. This achieves the maximum potential benefit imbued in the concept of “many”.

Private

The second word we’re associating with the blockchain era is Private. This point actually offers a few potential beneficial futures. First, we find blockchain software has the potential to be run in a multitude of environments (including on a private blockchain). Second, blockchain software, even on the main blockchain, has incredible opportunities to be private in nature, depending on the final implementation of the protocols identified in the point below. Which leads us to our final word…

Secure

Lastly, we find that the potential for a highly Secure era is beginning to be identified. This point is tricky because in its earliest iterations the blockchain era follows many of the same paths as the open source era. But as the various protocols are defined in more detail, we are able to recognize those shortcomings and improve on those failure points for a more secure and highly encrypted software infrastructure. This software holds the keys to potentially (virtually) unbreakable encryption.


While there are still a fair number of questions surrounding blockchain software and the development of DApps, I am confident we are experiencing the next era in modern software progress. The future of software will come from these explorations.

I realize now there is actually a final point to be made in this series, one which I believe holds serious consequences for modern software developers, implementers, and SaaS businesses everywhere. In fact, I believe this final realization holds incredible impact for existing software businesses and calls into question a terrible practice we have incorporated without even a second thought throughout all SaaS companies.

The ramifications of this blind de facto choice are far-reaching and highly devastating. Recognizing this fallacy helps us bring the problem to the surface and then allows us to resolve it in an extremely elegant manner. Check in next Tuesday for the final installment in this series.


July 3, 2018
Improving usability with simple JavaScript

If you read my blog much or listen to the podcasts, you know I tend to talk a lot about active listening. (In fact, I just referenced this Sunday.) But the idea of active listening is only the first step in this journey. Beyond the act of listening actively you need to follow through with the next step, one I consider equally important: applied listening. This is where I take the listening I’ve been doing and actually use it to affect something I am doing. I apply the knowledge I’ve gained.

Oh, but there’s lots of room for learning still, and today is no different. What you’re about to read is my Tech Tuesday post. Last week we dug in deep and explored polynomial code computing. I’ll save you the mental struggle of wading through another concept at the same depth this week and instead explore a more applied technology. In fact, we’re going to keep things extremely simple this week and look at something I wrote over the weekend.

The idea is simple. I wanted to take my applied listening and do something with it for the purpose of making this blog in particular easier and better for my readers. 

The idea: applied listening

Real life example coming at you. My blog posts usually come in at around 1,000-1,200 words, with some going even longer. That’s a lot to read; not necessarily when taken individually, but when put into the context of a week’s worth of daily posts it can be overwhelming, and possibly a bit daunting. I was faced with a dilemma. The depth of each post is important, and the information I’m conveying is valuable, typically without unnecessary verbosity.

But not everyone has the ability to devote the time required to read a long post each day. In fact, my best friend once mentioned no matter how much they hoped to be able to, they could never keep up with it all. And this resulted in a negative experience for them! The exact opposite of what I hoped to accomplish. I want my readers to feel inspired, motivated, and most importantly in control of their time. When the length of my post dramatically and directly contributed to the opposite effect I felt I was the one failing them!

I wanted to find a way to resolve this conflict and provide a better user (reader) experience while at the same time not sacrificing the quality or content of my message. This leads to my proof of concept below.

The proof of concept:

There are existing plugins which will report the average reading time of a post. These are somewhat helpful in telling the reader how much time to expect a particular post to require. However, in my opinion this reading time message is merely passive usability. I’ve written a good deal about the notion of active vs. passive. (Don’t get me started on this in regards to artificial intelligence!) I call this passive usability because the basic message is merely, “Here’s what’s happening, deal with it.” Somewhat beneficial but not necessarily proactively helpful.

Instead, I’d like to draw your attention to the top of this post (if you’re not reading this on the actual post page, click through to the single post instead of the homepage). As you can see, my subtext is slightly different and a bit more specific. There’s an included link asking a question – got less? I believe these two little words and the included functionality take this usability from passive to active. What you have now is active usability because the message now says, “Here’s what’s happening, want to change it?” See the difference? Beneficial while also empowering and proactive.

At this point I was going to tell you to try it out. However I am 99% sure the minute I referenced the subtext in the previous paragraph you’ve already played with the technology and seen what it does. I hope your first response is delight mixed with a hint of intrigue. If that’s the case then I’ve been successful in changing the experience to a positive “reader experience”. 

Okay, but like all magic tricks, this JavaScript to improve usability is so simple you’ll shake your head when you learn how it’s done. But I don’t want to keep the secret to myself. I love to empower people and help them learn new things for their own benefit. Or as I’ve shown with Mautic I like to reveal what’s behind the curtain. (Open source for the win!) Keep reading to learn how it’s done.

The prestige

I admit it, I stole this section header from one of my all-time favorite movies. In essence, the prestige is the secret; it’s how the trick is done. So, here’s how this JavaScript trick is done:

  • First, I’ve written my post in its entirety as I normally would. Then I use a special toolbar formatting option I wrote into the editor that allows me to wrap words, sentences, and paragraphs of text in span tags. Each span tag includes a special class name, such as level-10, level-25, level-50, level-75, etc. Any number between 1 and 100 can be used in the level- portion of the class.
  • The second step implements a rather standard jQuery UI slider element (I’ll admit this was the first time in a very long time that I used jQuery UI…I almost didn’t believe it was still actively used!). This slider UI begins at 1 and has a max value equal to the total reading time of the post.

Side note: Total reading time, as I mentioned previously, is easy enough to figure out using an average words-per-minute read time (the sketch below uses this). Nothing super special here, honestly. It’s a basic equation.

  • The final step involves using a little JavaScript fadeToggle action to show or hide the spans based on their level-XX class relative to the position of the slider. (See the sketch just below this list.)
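To make this concrete, here’s a minimal sketch of the whole thing wired together. The #reading-time-slider element, the .post-content selector, and the 200 words-per-minute figure are placeholders of my choosing, not the exact code from my plugin:

```javascript
// Minimal sketch: trim a post to a chosen reading time with jQuery UI.
jQuery(function ($) {
  // Estimate total reading time from an average words-per-minute rate.
  var words = $(".post-content").text().split(/\s+/).length;
  var totalMinutes = Math.max(1, Math.round(words / 200));

  // Show or hide each tagged span based on the chosen reading time.
  function applyLevel(minutes) {
    $(".post-content [class*='level-']").each(function () {
      var match = this.className.match(/level-(\d+)/);
      if (!match) { return; }
      // Spans marked above the chosen time fade out; the rest fade in.
      if (parseInt(match[1], 10) > minutes) {
        $(this).fadeOut(200);
      } else {
        $(this).fadeIn(200);
      }
    });
  }

  // A standard jQuery UI slider from 1 minute up to the full reading time.
  $("#reading-time-slider").slider({
    min: 1,
    max: totalMinutes,
    value: totalMinutes,
    slide: function (event, ui) {
      applyLevel(ui.value);
    }
  });
});
```

I’ve used fadeIn/fadeOut here rather than a strict fadeToggle so the visibility state stays consistent no matter how the slider jumps around.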

It’s really that easy. Three simple steps and you have a “surprise and delight” experience for your readers. But since I’m all about the value of time and the essence of simplicity and convenience, I wrote a plugin to perform all this work. My job consists merely of selecting the appropriate spans from the toolbar; the plugin does everything else.

And finally, let’s open source everything.

Of course I plan to open source this plugin so everyone can see the code and have a go at it…and hopefully make it better! Before I do, there are a few things I’m still improving: basically cleaning up the code and implementing something I added just yesterday (take the page URL and add an “anchor” such as #3 to be automatically given the 3 minute version of the post). It won’t be too much longer before I share the code, and I’ll be sure to post an update so you can try it for yourself!
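For the curious, that anchor idea only takes a few more lines dropped inside the same ready handler as the sketch above (again using the hypothetical #reading-time-slider element and applyLevel helper from that sketch):

```javascript
// Read a URL anchor like #3 and jump straight to the 3 minute version.
var match = window.location.hash.match(/^#(\d+)$/);
if (match) {
  var minutes = parseInt(match[1], 10);
  $("#reading-time-slider").slider("value", minutes); // move the handle
  applyLevel(minutes); // a programmatic value change doesn't fire "slide"
}
```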

Have a great Tuesday and remember, simplicity is key, sometimes the best usability is also the easiest to create. Finally, remember sometimes what looks like a magical user experience only takes a few lines of code and a little bit of extra thought.


June 27, 2018
Exploring the Progression of Modern Software (Part 1)

I wrote recently about learning from the past, and I’ve spoken on a podcast a couple of times about the lessons we can learn from looking backwards as we prepare for moving forwards. And as I try to limit my future-thinking in my writing (hence the What’s Ahead Wednesday series), I think about putting my own comments into practice. What does it look like to examine our past as we set our sights on tomorrow? What can we learn? More importantly, what trends can we see which, when extrapolated, allow us to predict the future?

In this post I want to share with you what I’ve found to be one of the greatest personal revelations on this topic. I was talking about this with a friend the other day and the notion dawned on me mid-sentence. After I finished I went home and mulled it over for a while. I rolled the idea around and I played with it, massaged it, worked on it. What follows is my first pass at articulating it. Is it complete? Absolutely not. Is it perfect? Far from it. But perhaps the act of writing my ideas down and sharing them will trigger your thoughts. Maybe it will start a conversation. Maybe the future starts right here.

Side note: If the sentiment in that paragraph above appeals to you, then you might love one of my favorite books of all-time. Before you think I’m going to recommend a 600 page tome for your weekend reading assignment, listen closely. The book is called simply, “What do you do with an idea?” I’ve shared this book on my blog in the past, used it as motivation when speaking at conferences, shared it at Mautic more than a few times, and I recommend it to everyone. If you don’t have a copy – buy one. Today. In print.

Examining software’s historical progression

Now, I always hesitate before sharing some of these thoughts because I fully recognize what is about to occur is a gross generalization of the full history. And I also hesitate knowing the vast knowledge and personal experience many of my readers have in this space, many with knowledge far beyond my own. To quote another, “I speak as a fool”, or at least suppose myself to do so. With that very strong word of caution, here is a rough generalization of a thread of continuity I can see as we explore the historical progression of software development over time.


The Personal Computer Era

Things began on a computer, a single computer. Systems were stabilized, functions formed, and programs proliferated, all within the box of a single machine. Advancements were made to improve the CPU, the RAM, and the motherboard, but all the software was created to live and run within the beige box sitting atop a desk in front of the user. This was single-location software.

But this environment did have a few benefits as well. In addition to being easy to update (usually via a floppy disk with the latest version), the user had full control over their information and their data. Nothing left their computer unintentionally and very little left intentionally. This meant these single-location software systems were private. The user data was stored locally and used locally. This closed data system was by its very nature private.

There’s an additional benefit to single-location software. Usually (of course there are exceptions to the rule) this software is secure. Whether this security was simply a forced by-product of the environment or not, the outcome was the same: software in this stage was typically considered more secure. Hack attempts existed, but they took different and more complex forms with a higher level of effort.

Three Word Summary: Single, Private, Secure


The Server Era

The next step in our software evolution saw the migration from single-location software based on a user’s computer to many-location software based on a server, or server cluster. I realize we’ve taken a giant step forward; we’ve passed by the smaller step of single-location software in the days of the early internet. This was a transitional phase (to use the evolutionary term). And although, unlike in evolution, we still see very clear examples of this transitional phase living today, they are far and away the minority. As a result, I suggest this next step in modern software progression is the creation of a many-location software system.

Just as with the original, box, single-location software we started with, here too there are a few benefits and detriments which accompanied this shift. While the highlights are evident (faster processing time due to the volume of and access to high-end compute power, immediate global accessibility, instant updates, constant availability, etc.), I will focus on two other factors which represent shifts from the previous stage.

In juxtaposition to the single-location, computer-based software of the early years, server-based software is at best considered semi-private. In most cases you might even argue this software is less than semi-private and inclined more toward semi-public. The user’s data is available to the user but owned by the software system. This is a major shift from the previous era. If the data is no longer the user’s, then it is by definition no longer fully private.

Finally, in our current stage we are also seeing these many-location, server-based software systems are considerably less secure. One need only look to the headlines within the last month to find multiple stories of data breaches. As the software systems in this era continue to hoard data, they exponentially increase the size of the target for malicious attacks on their software. Even though these companies attempt to provide constant fixes, updates, and improved security, the bottom line is evident: this server software is semi-secure.

Three Word Summary: Many, Semi-Private, Semi-Secure


The Decentralized Era

Finally, this allows us now the opportunity to begin to explore the future. Currently we are living in the end of an era. We are watching the archaic SaaS dinosaurs of today’s data-driven economy falter. I would be so bold as to suggest we are on the cusp of an event. I believe we will soon see the software equivalent of the Cretaceous–Paleogene extinction event. This is a bold statement, but consider this: if we lived in the age of the dinosaurs, would we have seen it coming? More likely we would have scoffed in the face of such mass extinction! Who could imagine this destruction given the sheer size, considerable strength, and ultimate dominance of such magnificent creatures! (And yet here again, history appears to repeat itself.)

Two Paths

The exact path we take remains to be seen. However, I believe there are two potential paths, and both lead us to the same outcome. First, we may see, as in the shift from the age of the computer to the age of the server, a transitional step form to bridge the gap from here to the future. Or we may see some event dramatically shift the landscape overnight. I can’t say for certain which will occur, but I’ll share my opinion, as I alluded to earlier. Given the size, dominance, and control exerted by the existing server-based software companies, who are enjoying life as is and don’t see the value of further evolution (again, this can be identified based on what motivates or drives them, aka how they make money), I believe the only logical path involves a cataclysmic, seismic shift in the landscape and the economy.

Regardless of the exact path, I believe the outcome remains consistent. Based on our history as we’ve defined it above, we can extrapolate what the idealized future might look like. In this case we’ll start with the three word summary and work backwards.

Three Word Summary: Many, Private, Secure

The next logical progression of modern software takes the best of every past iteration and era of software. This means we should expect to see a many-location software platform, both private and secure. And if that definition doesn’t immediately strike a familiar chord, then I’d recommend reading more on the subject of the decentralized web. These are just a few of the core tenets of this software philosophy.

Many-location refers to an expanded and improved implementation of the current age of SaaS server software. This is the natural next step in the following progression: single computer, single server, single-provider cloud, multiple-provider network.

Private refers to the location and storage of the data. This can be done separately and distinctly from the software provider. This point also includes a multitude of encryption possibilities, blockchain-stored sovereign identities, and so much more.

Secure refers to the trustworthiness of the software; decentralized software allows for trusted, verifiable software solutions. Smart contracts and immutable ledgers add an unprecedented layer of security to this decentralized software future.


To be continued…

I did it again, and I apologize. I didn’t mean to go this long, and the hardest part is I’m only halfway through the explanation of this theory. I believe there’s a second piece to the puzzle, an equally satisfying piece which fits perfectly into the picture and reinforces the original thesis statement. I hope this has intrigued you and caused you to think about what this future looks like.

What do you disagree with? What do you find compelling? Have I missed anything which might further substantiate this line of thinking? Let me know! I’ll post the next installment soon!

June 26, 2018
Pardon the Math, Polynomial Code Computing

I feel obligated to begin this post with something I will rarely do: I’m issuing a disclaimer. What you are about to read is intense. But before you get all titillated thinking I’m about to post something scandalous and make you blush, don’t panic. I am not posting anything explicit. Rather, what follows is a deep dive into a topic I only recently learned about but am completely fascinated by. Okay, here’s your disclaimer:

Disclaimer: The post you are about to read contains math. And not your run-of-the-mill, basic, 1+1 arithmetic. We’ll dive deep into some advanced concepts. Don’t let it scare you. Force your mind to think about the implications and expand your horizons.

The high-level concept

Computers and information systems today process information in a typical, somewhat linear fashion. In the early days, problems of speed and scale were solved by throwing more hardware at the problem.

This concept always brings to my mind the possibly mythical, certainly embellished, tales from the Google vaults. In the search engine’s time of growth explosion they found it was cheaper to merely add more servers into their data centers in new locations than to take the time to remove and replace the dead ones as they failed.

Regardless of the veracity of this seeming tall tale, the underlying principle holds an element of truth. Everyone knows if your website is running slowly the first thing you do is add more RAM to the server (followed closely by increasing your number of CPUs). That’s your quick history correlation. Bottom line: adding more machines was the solution for slow servers and delayed processing.

This is yesterday’s solution applied to today’s problem. This is wrong thinking. There’s a better way, which brings me to the paper I’ve been studying and the research being done around the concept of polynomial coding as it applies to optimal designs in matrix multiplication. And finally, we get to the high level concept:

Rather than taking the historical approach of adding more machines to continue the functional processing of slow or lagging machines and still limiting the solution until all processes across all machines have been resolved, polynomial encoding creates a high-dimensional coded matrix to arrive at the solution in an optimized computational strategy where the minimum possible recovery threshold for the distributed matrix is determined to allow efficient decoding of the final output by the data requestor. 

The product code matrix approach

I recognize that last sentence is an abomination to the English language, but this is a mathematics-based post and not a grammar dissertation, so I humbly ask for your clemency. Let’s take a look at what this solution means in a diagram (you knew it wouldn’t be a math post without a diagram, right?)

In this (terribly drawn) example I’ve sketched a 1D maximum distance separable (MDS) code on the left (where we have 3 workers computing the solution) with a single worker failure; and on the right a 9-worker matrix based on a √N × √N layout with 4 worker failures (this second example is considered a product code).

These matrices lead to an equation for the recovery threshold, the minimum number of finished workers which guarantees the requestor can decode the final output.
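Reconstructing from the paper in my own notation (N workers, A split into m pieces, B split into n; depending on which of the two matrices gets coded, the roles of m and n swap in the first expression):

```latex
% 1D MDS: code only A, and split the workers into n groups of N/n,
% one group per uncoded block of B. In the worst case, n - 1 groups
% finish entirely before the last group returns its m needed results:
K_{\mathrm{1D\text{-}MDS}} = N - \frac{N}{n} + m
% Product code: the sqrt(N) x sqrt(N) grid sketched above codes both
% A and B, and for fixed m and n the threshold drops to roughly
K_{\mathrm{product}} = \Theta\!\left(\sqrt{N}\right)
```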

In essence we can see that the product code approach is a significant improvement over the 1D MDS approach exemplified above. But the question now becomes: is this optimal? Does it naturally follow that an increase in the number of workers improves the optimization of the computation?

The researcher discovers a surprising fact and, through some rather ingenious applied mathematics, comes to a very different conclusion. Qian Yu, a PhD student, proposed and then wrote a paper sharing his theorem and proof for identifying optimum recovery thresholds.

Identifying optimum recovery thresholds

Through the use of polynomial codes, Qian demonstrates the optimum recovery threshold can actually be as low as mn, the number of blocks in the output. Here is the main result from the paper he published:

For a general matrix multiplication task C = A^T B using N workers, where each worker can store 1/m fraction of A and 1/n fraction of B, we propose polynomial codes that achieve the optimum recovery threshold.

He then shows polynomial codes require a decoding complexity that is almost linear in the input size.
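To give a feel for why mn is achievable, here is my sketch of the construction for the smallest interesting case, m = n = 2, where A is split into blocks A_0, A_1 and B into blocks B_0, B_1 (my notation; see the paper for the general encoding):

```latex
% Worker i is assigned a distinct evaluation point x_i and multiplies
% two small encoded matrices:
\tilde{A}_i = A_0 + A_1 x_i , \qquad \tilde{B}_i = B_0 + B_1 x_i^{2}
% Its result is one evaluation of a degree-3 matrix polynomial whose
% coefficients are exactly the four blocks of C = A^T B:
\tilde{A}_i^{T} \tilde{B}_i
  = A_0^{T} B_0 + A_1^{T} B_0 \, x_i + A_0^{T} B_1 \, x_i^{2} + A_1^{T} B_1 \, x_i^{3}
% A degree-3 polynomial is determined by any 4 evaluations, so the
% requestor can interpolate all of C from any mn = 4 finished workers,
% no matter which of the N workers straggle or fail.
```

That interpolation step is also where the almost-linear decoding complexity comes from: recovering coefficients from point evaluations is a classic, fast polynomial interpolation problem.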

I will save you the work associated with proving this theorem and will leave the fundamental mathematics associated with the polynomial matrices for your review of the original paper. The implications of this discovery are vast and far-reaching. It would be a terrible understatement to suggest this is only a step-wise improvement in our computational processing abilities. This is an exponential, order-of-magnitude improvement.

The Practical Implications of polynomial coding

I’ll leave you to contemplate this original work on your own and will instead highlight a few obvious implications of this revelation in our thinking around computational coding. In current technology our processing happens linearly. We scale things linearly. Through the introduction of polynomial codes we can achieve optimal designs in record time, because the result is not a simple linear scale tied to N, the number of workers.

The practical implications of this development will be seen in the computationally intense fields first (think machine learning, or artificial intelligence). Or consider also the fields where “big data” players have traditionally found strength by “increasing bandwidth”, or in more proper terminology, increasing N (the number of workers). As Qian has proven, the introduction of polynomial codes to the distributed matrix multiplication problem could revolutionize these industries and many more. I have no doubt these findings will have ripple effects through every aspect of the internet as we know it today.

I recognize the depth of this post extends beyond what many will find time to review, but should you be interested, here’s the research paper addressing the topic. I encourage you to expand your mind and push your thinking to explore new concepts and move your horizons!


June 12, 2018
My Favorite Netflix Open Source Code

We’re kicking off Tech Tuesday with this post! I will probably post code discussions on these days. (I had a second post I wanted to share, but is two simply too many for one day?) I’ll also post some topics that are more studies of other technical projects or, as in today’s example, share a tech resource I find useful, instructive, or otherwise helpful. In this post we’re going to explore one of my favorite brands. Let’s examine Netflix as a brand and a company, separate from the ubiquitous service they provide.

For the uninitiated, Netflix has 149 open source projects listed on GitHub. Clearly they believe in the philosophy of open source. It’s certainly exciting and refreshing whenever large organizations demonstrate their transparency by open sourcing their various tools. In my opinion this is a great example of “a rising tide raising all boats”…or to use another popular analogy, “sending the elevator back down”.

Anyway, the difficulty of selecting a favorite project is dramatically increased by the sheer number of projects to choose from. In an attempt to give a fair representation, I’ll first share some general stats based on their existing projects’ statuses, and then I’ll share my personal favorite. (Spoiler: my favorite is different from the general population’s.)

There are a number of ways to explore the popularity of projects on GitHub (where Netflix and millions of others store their open source code), but the main ones are forks and stars. I would go further and say their order of relative importance is as I have listed them here. By this I mean someone who has forked the code is more likely demonstrating an intent to do something with the code, while a starred repository may simply be a bookmark to reference later or merely a way to “favorite” the open source code. Regardless, I think it’s not a bad idea to look at both of these metrics for Netflix’s repositories (open source projects) and get a feel for which projects are considered the most popular by the open source world.

As a bit of background, I did some digging to begin with to ensure I was gathering this data the best way, because I certainly wasn’t going to attempt to build a list of stars and forks from their main repository page by hand…I’m a programmer at heart, so I’m lazy (although others consider this brilliance). As a result of my Google-fu and my somewhat lacking GitHub website knowledge, I finally came across the pages I link to below, which made my job ridiculously easy.
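If you’d rather pull these numbers yourself than click through the search pages, the same data is exposed by GitHub’s search API. Here’s a quick sketch (unauthenticated requests are rate-limited, and the counts drift over time):

```javascript
// Sketch: list Netflix's five most-forked public repositories using
// the GitHub search API.
fetch("https://api.github.com/search/repositories" +
      "?q=user:netflix&sort=forks&order=desc&per_page=5")
  .then(function (response) { return response.json(); })
  .then(function (data) {
    data.items.forEach(function (repo, i) {
      console.log((i + 1) + ". " + repo.full_name +
        " (forks: " + repo.forks_count +
        ", stars: " + repo.stargazers_count + ")");
    });
  });
```

Swap sort=forks for sort=stars and you get the second list below.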

Forks:

https://github.com/search?o=desc&q=user%3Anetflix&s=forks&type=Repositories

Based on this filtering, here are Netflix’s 5 most-forked repositories.

  1. Netflix/Hystrix (2,814): Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services
  2. Netflix/eureka (1,361): AWS Service registry for resilient mid-tier load balancing and failover.
  3. Netflix/zuul (1,044): Zuul is a gateway service that provides dynamic routing, monitoring, resiliency, security, and more.
  4. Netflix/SimianArmy (929): Tools for keeping your cloud operating in top form. Chaos Monkey is a resiliency tool that helps applications tolerate random instance failures.
  5. Netflix/ribbon (517): Ribbon is an Inter Process Communication (remote procedure calls) library with built-in software load balancers. The primary usage model involves REST calls with various serialization scheme support.

Stars:

https://github.com/search?o=desc&q=user%3Anetflix&s=stars&type=Repositories

Based on this filtering, here are Netflix’s 5 most-starred repositories.

  1. Netflix/Hystrix (13,920): Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services
  2. Netflix/falcor (8,814): A JavaScript library for efficient data fetching
  3. Netflix/SimianArmy (6,544): Tools for keeping your cloud operating in top form. Chaos Monkey is a resiliency tool that helps applications tolerate random instance failures.
  4. Netflix/eureka (5,570): AWS Service registry for resilient mid-tier load balancing and failover.
  5. Netflix/zuul (5,323): Zuul is a gateway service that provides dynamic routing, monitoring, resiliency, security, and more.

As you can see from the above, the two lists are remarkably similar, and yet they aren’t identical. I’ll leave the debate and resulting inferences about these deviations as an exercise for you. Now, I’ll share with you my personal favorite open source project from Netflix.

My all-time favorite has been around a little while and is most recently also bundled in one of the repositories above, in addition to being a standalone project.

Netflix/chaosmonkey: Chaos Monkey is a resiliency tool that helps applications tolerate random instance failures.

To understand why this is such an incredibly brilliant repository and something which demonstrates the sheer genius of the Netflix operations team, you should read this article: Netflix Chaos Monkey Upgraded – Netflix TechBlog – Medium. Here’s a highlighted quote from this post:

“We created Chaos Monkey to randomly choose servers in our production environment and turn them off during business hours.”

…I can’t even begin to explain how cool this is from a programmer’s standpoint. (“Cool” after the utterly terrifying part has been resolved and there’s a sense of confidence in the infrastructure and codebase.) Basically, the name comes from the idea of unleashing a wild monkey in the Netflix data centers to randomly rip apart instances and destroy connections — all while Netflix continues serving customers without interruption.

Now, to be fair, the Simian Army repository above is the evolution of this concept, as in this project they have also included the Latency Monkey, Conformity Monkey, Doctor Monkey, Janitor Monkey, Security Monkey, 10-18 Monkey, and finally the upgraded Chaos Gorilla.

If you’re interested you can find a write-up of each of these simians on the Netflix Tech Blog on Medium (one of my all-time favorite Medium blogs to read…voraciously).

Because their desire is to create a resilient, faultless environment, and because they are willing to subject their production environments, in real time, under load, to these types of random chaos tests, this is by far my favorite open source project from Netflix. And they made it open source so everyone can benefit. This improves (or should improve) the code quality and infrastructure of every major company, product, and team working on the internet.


June 9, 2018
Tech Tuesday: History Repeats Itself

I’m certainly not suggesting anything about your age if you know exactly what machine is shown in the picture above, but I suspect there are a few of you reading this post who know exactly what it was and what it represented in tech history. If you are one of those lucky few, I ask your forgiveness for any potential errors in the post that follows or any assumptions which are not entirely accurate.

These Tuesday posts are my chance to highlight technology. Usually they take on a more technical form and discuss topics on a more programmatic or procedural basis. (In fact, some of my posts have been labeled downright boring as a result of the amount of math involved.) I hope this Tech Tuesday post will take a rather complex topic and make it slightly less technical. Before I get into how the picture above applies to technology today, and to what I consider a highly relevant topic, let me tell you a short story.

Once upon a time…

Once upon a time a group of incredibly bright mathematicians and early computer programmers got together to discuss a problem. Rather, they wanted to explore the possibility of making some of their theories a reality. You see, there was a project at MIT called Project MAC (no relation), and a few engineers from a company called General Electric, along with Bell Labs, got together and began to talk. They were excited about the potential of building a supercomputer: a massive undertaking capable of solving all manner of problems and storing data. This new system was called Multics, an operating system designed to handle complex situations, dynamic linking, procedural calls, and live part-swapping. Multics even supported multiple processors (rare in that day and age).

The list of features found in Multics continued to grow and expand, and as you can no doubt begin to tell, pointed to the scope and magnitude of this operating system. It was grand and magnificent and all-inclusive. But that’s where things began to become a problem.

Multics vs. Unix

As the Multics operating system grew and expanded, it became larger and more monolithic in its framework. It added functionality and features for dozens of different applications. Around this same time a new operating system began to take shape as well, one called Unix. This OS was simpler. It was still powerful, and in fact in many cases derived a great number of features and functionalities from the discoveries and work done on Multics. But Unix did something very different.

Rather than creating an operating system that contained all the features in a single package, Unix was built around the concept of a package manager: the ability for an engineer (or systems operator) to selectively add the packages and features desired for their unique application. In this way the power of Unix was delivered in smaller discrete packages, distributed independently rather than as a single all-inclusive package.

And as you are probably very well aware, Unix exploded in growth. Not only Unix, but Linux, macOS, and even indirectly Windows NT all came about as operating systems offering different features and appealing to different audiences. But Multics? Well, as you may surmise, Multics slowly disappeared from use, the shortcomings of the monolithic all-inclusive platform giving way to the lightweight microservice approach of its successors.

Monolith vs. Microservices

This leads me to my thought for today and the associated title of this post: History Repeats Itself. You see, what we have come to see in many modern software packages and SaaS products is this same belief that a singular monolithic platform is somehow superior. There’s the misconception that a sole all-inclusive product must provide a better experience because it “does it all”. But in this way I am reminded of a quote:

History repeats itself, but in such cunning disguise that we never detect the resemblance until the damage is done.
– Sydney J. Harris

Because we now call this solution SaaS (the latest and greatest iteration of software delivery) we assume it must be superior in all ways to anything else. And just like that, we have taken this suggestion for granted because it’s wrapped in such a clever disguise. And if we don’t recognize the truth we will repeat our past.

Instead, we can be smarter. We can demonstrate wisdom and keep the damage from occurring by simply stripping away the disguise and recognizing the similarities of our situation.

Modern Microservices

If we are to learn from our past and create a brighter future we should begin now to push the limits of what our software systems do. I refer in this case to microservices. We’ve discussed this previously here in recent posts. And though the concept might be intimidating at first glance (new things often are) the results are powerful and forward-focused. We can create lightweight, fast, and powerful software systems that take advantage of what we’ve learned in the past, both the earliest operating system achievements as well as the recent learnings from SaaS solutions.

And this is an interesting point that should be mentioned. Martin Fowler, one of the most well-known voices in today’s software development and a personal guide in my thinking, has written some fascinating articles on this subject. He has said it like this:

A more common approach is to start with a monolith and gradually peel off microservices at the edges. Such an approach can leave a substantial monolith at the heart of the microservices architecture, but with most new development occurring in the microservices while the monolith is relatively quiescent.
– Martin Fowler

He goes on to make a statement which at first glance may make some very concerned and even sad, but I think it’s important to realize there is an end that is better for everyone: the product, the community, and the people using the software. There is a purpose.

Another common approach is to just replace the monolith entirely. Few people look at this as an approach to be proud of, yet there are advantages to building a monolith as a SacrificialArchitecture. Don’t be afraid of building a monolith that you will discard, particularly if a monolith can get you to market quickly.
– Martin Fowler

This resonated with me deeply. This is how we have begun developing things at Mautic. We have created a strong foundational platform, we’ve identified what works and what doesn’t, and we’ve created a codebase that is tested and constantly improved structurally. Now as we look ahead at Mautic 3 we can be proud of Mautic 2, how it helped us arrive at the point where we are today, and how we can go boldly forward into tomorrow.

Mautic isn’t perfect. I’m not sure it ever will be. But we have been following a plan, a process by which we can continue to improve and dominate the MarTech space. We have set a course for success and we have determined to become progressively better each day, each commit, each release. I hope this helps others see the path we have set, the reason why I believe we will be incredibly successful, and offers to all the assurance that this is a course we have crafted with forethought and purpose. 


May 21, 2018
Transportation Evolution

Wow, we have certainly progressed quite a long way in our mobility and mechanism by which we get from point A to point B. What a wide variety of methods and each one seemingly more advanced, and more technologically improved than the last. What a testament to our achievements as a human race and our ability to create and to innovate! But wait...

This past weekend I was standing at my open balcony doors (the weather was truly wonderful), admiring the sunset and the beauty of the river and road below, when I was struck by a most interesting thought. I want to try to share it with you, so forgive me if something is lost along the way or I don’t make perfect sense. I hope the thought will be conveyed.

Snapshot

Here’s what I saw as I gazed out into the world. (And yes, I think it’s an unusual occurrence and one I haven’t actually witnessed before.) On the river were several people in kayaks and a rowing team out for an evening practice run. There were joggers on the running path around the park, several cyclists in full gear pedaling along the edge in tight single-line formation, a handful of cars waiting to turn at the intersection while a motorcycle sped through the exchange, pedestrians pushing strollers, the MBTA (“T”) Orange Line rumbling northbound while the purple commuter rail clattered past at twice the speed, and an airplane droning overhead as it pulled away from Boston Logan International Airport headed for some unknown distant destination.

It sounds crowded, and a little chaotic, but this is not the picture I want to paint for you. Yes, there was certainly a lot going on, but the noise was not unbearable, the scene not one of pandemonium. As a matter of fact everything moved seamlessly and with a sense of elegant precision.

Progress

What I really hope you see in this microcosm is something truly phenomenal. Here, captured within my gaze, was a snapshot of the evolution of transportation over the past 200 years. Did you catch them all? I’m sure you did as it was quite the overwhelming paragraph. We had everything: walkers, runners, cyclists, motorcycles, cars, boats, trains, and airplanes.

Insight

My immediate first thought was as I’m sure yours might be too — wow, we have certainly progressed quite a long way in our mobility and mechanism by which we get from point A to point B. What a wide variety of methods and each one seemingly more advanced, and more technologically improved than the last. What a testament to our achievements as a human race and our ability to create and to innovate!

But wait, as I said this was only my immediate thought. And it was after this thought that the truly interesting idea began to form. We have all these advancements, the ability to travel literally around the world. And we have an incredible opportunity to not only travel from point A to point B but to do so swiftly. And yet, we don’t travel via airplane everywhere. Clearly airplanes are the fastest means of transportation (in the scene I described earlier), but they’re not the most practical. Similarly, we don’t necessarily always jump on a train, or into a car, for every jaunt outside. There’s a reason for this. We use the most practical method for the journey.

Each mode of transportation has different benefits and different reasons which make it an acceptable (and still widely used) method for moving from where you are to where you want to be.

Specifically, you’re not going to hop in an airplane to get from your house to the local grocery store (any more than you would get in a car to go through the park). Or to put it a different way, the time it takes to lace up your skates versus the time it takes to just walk from your front door to the mailbox might make the skates equally impractical. It’s not always about speed in the context of the vehicle, but in the context of the situation.

Application

I apologize for this part of the post but this is something I can’t seem to stop myself from doing: applying these ideas to other areas of life. In fact, I think this is partly due to my instinct to focus on core principles.

Marketing software is evolving at an increasingly rapid pace. The space is growing in complexity and advancing in technology all the time. But speed isn’t everything and the latest technological improvements aren’t always the right choice. Instead, just as with our chosen method of transit — we should use the most practical method for the specific journey.

So the next time you’re evaluating a marketing platform, or a marketing tool, make sure you’re considering the journey you’re on and the most practical way in which you should get there.



May 16, 2018
Marketing Automation Microservices

Recently I was on the phone with a good friend of mine. He’s not directly involved in the technology sector, which makes our conversations incredibly fun, light-hearted, and many times /not focused/ on the highly technical discussions and debates I normally find myself sucked into. This particular chat however ended up steering into my work and some of my recent blog articles, and he made a comment that caught my attention. He said, “Hey man, at some point can you explain what exactly you are talking about with Mautic 3 and this new version you’re constantly getting excited about?”

I’ve written a good deal lately about Mautic 3, from my initial thoughts on the subject, a business benefits piece, to a pretty technical introspective, and even a timeline for how I think it might unfold (yes, it’s aggressive). Being a good friend he had read all these articles, and this meant he knew what I was talking about and what I was doing, but he didn’t necessarily have a strong understanding of what it meant and what it actually would do. What he was asking was a very good question and exactly what I like to hear.

I get great advice from lots of people, but some of the best advice comes in the form of a question, and comes from those that are not too close to the situation. Those questions are the best for me. They help me to re-focus, or maybe to state it better, they help me to step back and see the forest, not just the trees.

The Marketing Forest

I hope this post will be a less technical and clearer view of the marketing automation forest as I see it. That forest, I think, is an extremely important thing not to overlook. Maybe you don’t call it a forest, necessarily; maybe you prefer to call it a ‘landscape’.

I need to take a quick moment to tip my hat to the incredible work done by Scott Brinker (@ChiefMartech) and his team creating the marketing landscape each year. If you haven’t taken a moment to appreciate it – do it.

Regardless of what you call the space, it’s overwhelming, and as Scott suggested in a recent blog post the space is only going to continue to grow. There will not be a mass consolidation of marketing tools but instead a proliferation as more and more are introduced. This leads Scott down an interesting line of questioning and thinking. I call it interesting because he begins to touch on the very thing I have been speaking and writing about. I’ll touch more on that in a second, but first, let’s talk about the implications of such an expansive (and constantly expanding) marketing space.

Expansion means competition

What we see occurring in the marketing space is not uncommon, nor should it be something to be afraid of. Instead, the increasing number of companies entering this market improves the customer experience. As more and more services are offered the customer will (hopefully) find a better and better solution to their problems. At least that’s the idea. Where businesses happen to overlap, competition comes into play and the products will improve. (There is also the side effect of potentially lowered prices as well!)

I’m always a fan of competition; I believe it has been well proven that competition results in a better environment and experience for everyone. It also pushes companies to continually improve.

There’s a second outcome I see as a result of this massive and somewhat exponential growth. As Scott suggested, and as I’ve talked about many times previously – with so many options and companies available in this space there becomes a greater problem to be solved, a greater need to be met. This is where Mautic is uniquely and (dare I say it) perfectly poised to meet the need.

Marketing Disconnected

I recently read an article published by KPCB which shared the number of marketing tools that a single enterprise business uses. It’s mind-blowing. Care to take a guess? If you guessed 10-15, you’re off by a mile. If you thought 25-50 you’re getting closer (and by closer I mean halfway to right). The number of different marketing technology services, platforms, or products that an enterprise uses is nearing 100 unique systems. This is the product of an 8,000-tree “marketing forest”. But while some may see a problem, I see an opportunity – a massive opportunity.

I see an opportunity of epic proportion that only an open source, agile, API-driven marketing automation platform can attain. You see, a proliferation of tools means there needs to be some manner for communication between them, some exchange platform for the data to be shared, and other advanced data transformation to be performed.

What this marketing disconnect needs is a connector. Something that can seamlessly integrate with all those tools, fluidly fill the gaps between them, complement them, and improve the marketer’s experience. But this shouldn’t be another app with a fancy UI. Even more importantly, this can’t be another platform seeking to be the “data holder”, the one place where all data must be kept (i.e., the single source of truth).

Side Note: This is a point worth more consideration. Almost without exception every existing platform seeks to be the source of truth. They believe only by owning the data are they able to “win” the competition to be best. Therefore, everything they do is to protect and extend this perceived trophy.

Big Data Is Not the Answer

This point is a big one. Many businesses today focus on big data (or at least they used to). What do I mean by big data? I’m glad you asked. Big data means collecting as much information on as many people as possible. Once all that data is held the theory is that predictive analysis and data scientists can extrapolate potential results and thus make smarter marketing decisions. But there is a shift in the tide and this commonly held belief is wavering.

Interesting Read: If you’d like to learn more about what causes this change in thinking, I’d suggest reading this article: Mastering the game of Go without human knowledge.

If the collection of more and more data is not the answer, what is? What is the solution that makes marketers more successful and handles the overabundance of different and disparate tools currently existing in the marketing forest? Enter marketing microservices.

Marketing Microservices

I realize there are some who fully understand what a microservice is and what value it offers. To those readers I apologize if I make it sound too simple or oversimplify the technical nature of the definition. My goal is to summarize in such a way that everyone feels comfortable talking about marketing microservices.

Personally, I’ve always seemed to learn best with examples and clear instructions. The simpler the better. (The popular subreddit: explainlikeimfive is a personal favorite of mine as you might guess.) And so I’ve picked a hypothetical marketing microservice to be used. And as you might imagine this is something Mautic is preparing for part of M3.

A Marketing Microservice Example

Almost every marketer I know and every system of record (you know, the place that wants to be the source of truth we explored earlier) has a common dilemma. In fact, marketing automation platforms in particular struggle with this issue on a daily basis. The problem is recognized but the problem is not easily solved. Curious? For our example, we’re going to assume the issue is contact record de-duping. The ability to recognize and remove (or merge) duplicate contacts in a database.

This is a problem everyone wants to solve but everyone takes a slightly different approach and everyone has found equally varying levels of success. A marketing microservice would allow a marketer to send contacts to a headless, marketing automation microservice provided by Mautic, which would de-dupe the records and return the result. Everyone wins.

The result is a marketer with a cleaned database of contacts, existing platforms don’t have to worry that another tool is “fighting” to be the “source of truth”, and Mautic has provided a valuable microservice to fill in a gap. Once again we have the idea of filling in the gaps. A clear opportunity for a fluid connecting of marketing microservices providing highly relevant, extremely efficient, valuable results.
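To make this a bit more concrete, here is a minimal sketch of what calling such a de-duping microservice might look like from any existing platform. The endpoint URL, payload shape, and Contact type are illustrative assumptions on my part, not a published Mautic API:

```typescript
// Hypothetical sketch of calling a headless de-duping microservice.
// The endpoint URL and payload shape are illustrative assumptions,
// not a published Mautic API.

interface Contact {
  email: string;
  firstName?: string;
  lastName?: string;
}

async function dedupeContacts(contacts: Contact[]): Promise<Contact[]> {
  // Send the raw contact list to the (hypothetical) service...
  const response = await fetch("https://dedupe.example.com/v1/contacts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contacts }),
  });
  if (!response.ok) {
    throw new Error(`De-dupe service returned ${response.status}`);
  }
  // ...and receive back the merged, duplicate-free list.
  const result = (await response.json()) as { contacts: Contact[] };
  return result.contacts;
}
```

Notice the caller remains the owner of its data; the service only sees the list it is asked to clean and hands back the result.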

Side note: An interesting side effect: when data storage is irrelevant, the marketer is empowered to do things better, without worrying about switching costs, data privacy concerns, and which platform is /the/ platform for their data. This changes everything.

And this is only one very simple example of the power and capabilities of a headless marketing automation platform. Don’t let the slightly unusual terminology throw you off – it’s only a technical way of describing a platform which fits in with other existing tools seamlessly and painlessly, complementing, strengthening, and simplifying existing marketing stacks while allowing the marketer complete control over where and how their data is stored, manipulated, and displayed. But that’s a lot of words! So I shortened it to headless marketing automation.


I hope this has helped to showcase what a marketing microservice is and how it can completely revolutionize the industry. All of the incredible power of marketing automation where and when it’s needed. No more data security and storage concerns. An improved experience for marketers. Finally, a solid, robust way for 100 different and disparate systems to begin effectively talking to one another and improving each other. This is the type of thing Mautic 3 is prepared to handle. This is the opportunity Mautic, an open source marketing automation platform, is uniquely able to address.

If you haven’t taken a look at Mautic, I suggest you do so now! Maybe after reading this you have some great ideas for other ways marketing microservices can add value in the overwhelming marketing landscape. Join in the discussions being held, add your voice to the Mautic 3 development process and become a part of something bigger than yourself, something that will truly improve the lives of marketers everywhere, and change the way the landscape is viewed.

May 13, 2018
Saelos Sunday Update 2

Here we are on the weekend again; and I love the weekends. Partly due to the fact that the schedule tends to be more relaxed, and partly because this means I get to spend some time on Saelos! If you’re just getting to my blog then you should do some reading on other posts before continuing: check out the Saelos announcement, a peek at the technical advantage, and the previous update, and then let’s talk about what’s coming next.

Today I want to do two things, first give everyone a quick update on the status of Saelos today (hey, this is an update post) and second, I want to share a little bit about a benefit that Saelos provides and why I believe it’s so incredibly important in the world today.

Saelos Growing Organically

Okay, so first is an update on where Saelos is today. Things continue to progress at a pretty good rate. I wrote last week about a Saturday project I undertook building a dashboard for watching a GitHub repository and monitoring growth. I applied it to Saelos to get an idea how things were looking. Here’s the screenshot:

[Screenshot: Saelos GitHub repository dashboard]

Here’s the lowdown from that snapshot. First, Saelos downloads are exploding! The growth is tremendous, with over a 250% increase from the previous release. Second, 2/3 of the issues have already been addressed in the last release and we’re working through the rest as quickly as possible. Lastly, Saelos has a total of 50 stargazers and 9 forks already. This is ridiculously fast uptake and I’m very excited to see this kind of growth continue. Oh, and by the way, the downturn in that one release was because we pushed the next one so quickly there was no time for Beta 3 to be adopted!

Monopoly Should Only Be A Board Game

All this growth is exciting and confirms that, yes, there is a real need and desire for a super strong open source CRM. Honestly, just a super strong CRM (open source just happens to be the best mechanism in the world). But not a run-of-the-mill standard customer relationship management software that only functions in a small business environment. (Don’t get me wrong, Saelos works brilliantly for SMB.) No, what everyone really needs is an option.

There’s nothing worse than a monopoly in a market. And let’s be honest with each other: currently there’s a bit of a monopoly in the CRM market. And this has led to stagnation, a lack of innovation, and an overwhelming sense of despair. Why? Because when you’re an 800lb gorilla (or maybe that’s a bear named “Codey”…) you keep iterating on the same outdated mentality and philosophy and grow by acquisition alone. The result is frustration, despair, and heartache for businesses and sales teams everywhere.

So how does a cartel get overthrown? By no longer buying their product. But let’s be more realistic. It’s not quite that simple because businesses need a CRM. They need some system to manage their customers and those relationships (not to mention potential customers). And so simply walking away from a software platform is not an answer, something must fill the gap. Enterprise businesses should have options, reliable, capable options which can function at scale. This transition isn’t going to happen overnight. Rather, I believe, the best method for opening the floodgates for businesses is to create an alternative that offers immediate successes. Consider small wins. Super small wins. In fact, maybe we call them micro wins.

CRM Micro Services


Let me be the first to say I dislike talking about other software; what I prefer to do is talk about our software: why Saelos is different, why Saelos is better, and the ways in which Saelos stands apart and functions differently than other systems. So let’s talk about Saelos and the future of the customer relationship management space. I hinted at my ideas at the end of the last paragraph and in the title of this one.

As technology has advanced we’ve ridden the waves of boxed software, to hosted software, to software as a service. Even the irony of a ‘no software’ software company is hard to overlook. But as we continue to move forward in technology and software we see the landscape continue to change.

If we step out of CRM and look at technology in general we see the shift from hosted software, to containerization, and then server-less software (or functions-as-a-service).

But specific vertical markets like CRM have not made these same advancements, in part because monolithic software companies have found massive profits in their markets and have not been interested in pushing the limits of technology (or even keeping up-to-date with those advancements). There’s a second reason why some of these same improvements haven’t been applied to these markets: the inability of closed source software to fully capitalize on these changes.

This is why open source software has a particular advantage for businesses. Run your own containers, run your own server-less infrastructure, or even your own function-as-a-service with open source software.
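As a rough sketch of what that last option could look like, here is a minimal function-as-a-service-style handler. The event shape, the stubbed lookup, and the Lambda-style signature are assumptions purely for illustration:

```typescript
// Illustrative only: a tiny CRM lookup exposed as a single function.
// The event shape and the in-memory "store" are stand-ins for a real
// datastore; the handler signature assumes a Lambda-style runtime.

interface LookupEvent {
  email: string;
}

interface LookupResult {
  found: boolean;
  contactId?: string;
}

// Stand-in for a real database or API lookup.
const fakeStore: Record<string, string> = { "ada@example.com": "c-123" };

export async function handler(event: LookupEvent): Promise<LookupResult> {
  const contactId = fakeStore[event.email.toLowerCase()];
  return contactId ? { found: true, contactId } : { found: false };
}
```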

I believe Saelos, as an open source server-less CRM, gives rise to the future of customer relationship management. I believe we will (and should) see a proliferation of micro-services in the field of CRM. Use the tool that’s right for the job; pick and choose the best parts of each platform to make your business successful. And in this way, Saelos brings immediate successes and improvements to businesses that already have existing CRM systems.

And if by chance a business or organization does not yet have a good CRM in place, or is ready for a complete overhaul and change to their current system, then Saelos provides an amazing platform upon which to build. That’s the true beauty of Saelos: use the frontend with its cutting-edge UI, use the advanced API backend, or simply use the functions you need. Saelos works perfectly in each of those settings.

I am really excited about the future of Saelos and all it has to offer. If you haven’t yet taken it for a spin, I suggest you look for yourself and see what the future holds.


Staying Connected

The last thing I’ll leave you with for this quick update post is a very simple and easy call-to-action. If you like this content and want to be kept in the loop regarding all things Saelos then you need to fill out this short form and you’ll get an update newsletter direct to your inbox each time one is created.

Saelos Newsletter Signup

That’s it. I’ll do my best to keep the Saelos Slack channels updated as well as the newsletter and if you are following my blog here, you’ll also get updated whenever I post something here too.


May 11, 2018
Mautic 3 Proposed Timeline

The next major topic everyone is very interested in relating to Mautic 3 is the proposed timeline for when things will be worked on… and maybe more importantly when they’ll be available to users. I totally understand this desire and want to do my best to answer this question, but I truly hope that everyone understands this is not a black-and-white topic or something that can be easily answered. Why? I’ll answer that quickly and with two words: because developers. I say that in jest but the reality is not too far off from that joke.

In an open source community the release of new versions of Mautic is completely and totally reliant on the time and attention of the volunteers in the community. This is a massive strength for us because we have such a large number of volunteers, but it can quickly become an Achilles’ heel when it comes to timeframes. Volunteers work as they are able. This means while I will propose a series of steps below for the Mautic 3 timeline I will not attach highly specific deadlines or timeframes (at this stage).

Now in the future as we begin to move through this process and as we begin to accomplish certain milestones or goals we will have a better understanding of how things are flowing and can at that time begin to establish some rough timeframes for completion.

With this disclaimer in place let’s take a look at the various steps in a Mautic 3 release timeline and what is involved with each step.

Discussion Phase

This is where we’ve been living. This is the active ongoing discussion that has occurred in the core channel of our Slack group and if you haven’t been involved in that process, I recommend logging in and sharing your thoughts. This phase is anticipated to have controversy, differences of opinions, and different strategies proposed for how everything comes together.

I’ve written a fair bit lately on this topic as we discuss different options. Starting with a discussion about what Mautic 3 might look like, to the technological advances Mautic 3 might achieve, to the business benefits created by Mautic 3.

The desired outcome from this phase is a shared understanding and an agreed upon vision for what we want to accomplish as a community. And I would merely suggest compromise is important to keep in mind as we all work together for the good of the whole (I’m speaking that admonition to myself as much as anyone else.)

Team Formation

The next phase after the discussion phase is team formation. This shouldn’t take very long but there will be a time period where we want to evaluate who is involved in this team. Anyone in the community can be involved, but there are certain traits which will provide greater value to the team. Things such as a strong ability to see solutions in addition to problems. We want problem finders, but only if they are solution finders too.

Side note: Problem finders are critically important to our success, but problem finders alone are also /critically detrimental/ to our success. We must have problem solvers.

Secondly, having technical abilities and interests is vital to this team as well. I’ll try not to make this sound too obvious, but without developers we can’t create Mautic 3. I’ll be the first to tell you, I’m not writing this one on my own!

Consensus on Course

I know it sounds silly to appear that we’re returning to the discussion phase but that is not the idea behind this step. In this step we are taking the outcomes from the discussion and beginning to outline how we (as a team) tackle those challenges and begin development. We also identify what is possible and not possible to complete in a timely manner.

Did you catch that? This is the point where we begin to discuss overall timing for successful release of Mautic 3.

We discuss where we want to go, what resources we have, and what is a reasonable time frame to get there. This is what I mean by consensus on course. The direction is set previously; here we focus on timing and specifics.

Core Areas And Distribution Of Tasks

Next the team begins to identify those specific items that each developer or couple of developers is interested in focusing on. I think this is a particularly important phase because we will make the most progress and find the greatest success when everyone is working on something they love. If you feel passionately about a particular area you will put everything you can into it, and will be able to take incredible pride in the end result. And you’ll know that the end result is something that has been done well. Because you care about it.

I am driven by this mentality of seeing others do what they care about because I do what I love every day. I am committed to seeing others in our community free to focus on the things they are passionate about as well. Do what you love or move on to something else. This isn’t a duty, Mautic isn’t a chore, it’s an opportunity. Yes, at times Mautic may be a /challenge/ but that only makes the outcome better.

Key things to be examined at this stage will be the specific areas and leaders for each. Every functional and foundational part of Mautic will need to be addressed. Examples: Campaigns, Segments, Contacts, Companies, Email Builder, Landing Page Builder, Messaging and Channels, Plugins, etc… Let me be clear, that is not an exhaustive list. Not by any stretch.

Technology Proof of Concept

Once all the areas have been identified and work is clearly defined the next step takes place rather quickly. In my opinion this is a key validation step in the entire process. The idea of a proof of concept is focused on creating a representative example of the final product.

The goal of a proof of concept should be to confirm the path and technologies chosen to be implemented or clearly identify the ways the current approach is wrong and what should be done instead. That last sentence is super important. It’s more than just showing something doesn’t work. In the case where there is a misalignment of expectation and outcome, an alternative approach should be identified. (Remember earlier? It’s not just problem finding, it’s problem solving)

Once as a core team we are able to evaluate whether the proof of concept has given us the necessary results we can move on to the next step. Keep in mind that each major component must meet a minimum level of expected result for the progress to continue.

Go Go Go

This is the exciting phase. This is where everyone is turned loose to start creating Mautic 3 code. We have a direction, we have a plan, we have a solid proof of concept and we are prepared to create the future.

As we create new things it is critically important that we include testing at every step. This is something that was not done as effectively as it should have been during Mautic 1 and even Mautic 2. I can only imagine there was a collective groan emitted by everyone when reading this part. Writing the unit tests and functional tests associated with new code is only interesting to a very select few. (I hold massive respect for those who find pleasure and personal fulfillment in creating these test processes and procedures.)

This phase is also where collaboration is important. Without proper collaboration we will find ourselves working in silos too much. We can’t work without collaboration and sharing of information. Do not let the importance of this collaboration be lost as we look at the next phase below.

Silo Alpha Testing

Because we will be creating tests as we build new software we should be able to test our code as we go as well. I’m referring to this as silo testing because it can be done within each functional unit without having to be applied to the greater product at the same time. Again, an API-driven micro-services marketing automation platform gives us the ability to do this siloed testing.

During this stage we will also be refining and modifying this code as we go either to make sure it functions optimally or because we have seen additional improvements we can make as we create Mautic 3.

Bringing It All Together

Everyone gets excited at this particular step. Here we bring each of the individual pieces together and begin to evaluate what Mautic 3 looks like. This community core team gets the first sneak peek at what Mautic 3 will present to the world. Yes, this will be an exciting day.

As part of the process of bringing all the pieces together we will repeat some of the steps we undertook during the Silo Testing phase above. We will again evaluate and refine the product based on the interactions between the various parts and identify ways in which the whole of Mautic 3 can be improved to be more than just the sum of the parts.

Important: This should not yield any massive surprises to the team because it is understood that communication and collaboration has been occurring frequently through each of the previous stages.

Alpha Release Deployment

This is the first of several celebration stages! Here the core team wraps up and presents to the community the alpha version of the brand new Mautic 3. This is a milestone moment not just for the core team, and not just for the Mautic product, but for the Mautic community and the world of marketing automation at large.

The Alpha release is the first fully packaged version of the final Mautic 3 product. It will not be without issues. Did you read that? If you’re following the process of Mautic 3 development and you’re not part of the community core team creating this product it can be easy to miss everything that has gone into this process. And it can be easy to point to problems. May I encourage you to exercise discernment and caution as you do so. Feedback is of course welcomed and encouraged. But everyone should maintain the proper perspective and understanding of the status of Mautic 3 at this point. This is an Alpha release with known issues to be found. Do not use in production.


Recap

So, if you’re skimming through this article looking to find specific dates I’m sure you’re disappointed. But you shouldn’t be. Instead let me encourage you to scroll back up and read through the points with a bit more intentionality. Then you’ll understand why the dates are not listed. It will not be until we have reached Consensus on Course that we will have a better understanding and a first attempt to outline specific dates.

Let me reassure you, when we get to that phase, we will absolutely and unequivocally share some preliminary dates and deadlines. Without a clear goal we will meander without enough of a sense of urgency.

Now, if you’re still reading and want just a ballpark idea of dates, the following is my opinion on dates and relevant release points.

  • Discussion: May 15
  • Team Formation: June 1
  • Consensus on Course: June 7
  • Core Focus: June 15
  • Proof of Concept: July 15
  • Go: September 30
  • Silo Testing: October 7
  • Alpha Release: October 30

Disclaimer: This is my personal opinion only and is not a finalized roadmap. If anyone attempts to quote these dates as “official” you’ll be immediately and unequivocally corrected!

Please also notice I am not showing beyond the Alpha, as we get this far into the future it becomes more and more difficult to identify deadlines and milestone dates. I have ideas and goals in my own opinion which I think would make for an amazing 2018 but will not share those with you yet. I believe as we move along these steps we will be able to gain more clarity into what is possible and along that path I will feel more comfortable to share specifics on other areas of Mautic 3.

I hope this helps give you greater visibility and understanding into what I believe would give us an incredible future and the path I believe would help us get there. Don’t be disillusioned: this will not be easy, but I am confident that the rewards we will reap will be well worth every day spent and every problem tackled. I hope you’ll agree and you’ll join me as we create the future.


May 10, 2018
The Business Benefits of Mautic 3

Longer posts are always an undertaking. They involve a lot of thought and attention to detail to make sure the information I share is clear and easy to understand. There’s a lot of work that goes in behind the scenes for a finished post to come across as simple. So, usually before undertaking a post like this I like to take a deep breath, a big stretch, and smile. Something about staying positive before wading into the deep end once again.

I should say also, the smile comes from the amazing volunteers in this outstanding community. Together we are building an amazing platform that stands out from the crowd and differentiates us in a number of critical ways.

One of the biggest ways we can demonstrate our expertise lies in our ability to push the envelope and advance new thinking in the landscape of marketing automation. Yes it may be a daunting task, but I would suggest that our community, and our open source approach to marketing automation is the only way this level of growth can be achieved in a stable fashion. We have an opportunity that no one else has. And it’s our duty to push this space forward. It’s our obligation to change the world of marketing and give everyone the tools to market effectively using the latest in technology.

With that quick bit of motivation to get us started, let me dive into the business benefits of Mautic 3. I’ll follow that up with a proposed timeline for this product improvement as a separate post.

Business Benefits

Let’s start by exploring the business benefits of making this shift to the next major release of Mautic software. I want to highlight that the six benefits I’ll suggest are not necessarily distinct only to a Mautic 3 release. However, in each case I believe the greatest growth and leap forward would be realized in a major version release.

Stay with me to the end; I’ll discuss in a little more detail tomorrow the idea of migrations and how those would and should be handled with a transition like this.

Speed (Should I say Agility)
At the risk of sounding extremely cheesy, I noticed as I typed up these benefits that many of them ended in the same suffix and sounded like they fit together. The marketer in me could not help but adjust the one outlier (Speed) to fit the same mnemonic. Thus, for point 1, we have agility, or speed: an improvement the end user would experience directly, and our first business benefit. Faster is better (in almost everything – yes, I realize there are exceptions to every rule). Moving to Mautic 3 for our platform will enable us to completely gut parts of the system that have experienced significant slowdowns in the past. Those areas where we see bottlenecks in processing times can be alleviated and overall system performance can be improved.

Of course we could see improvements in the current branch (Mautic 2). However, through the discussion that has been held it has become evident that the greatest improvement to speed can be achieved with a re-write of several areas of the platform. This begs the question: if we must re-write core parts of the platform regardless of the branch in order to improve our speed, should we not make the necessary improvements everywhere and better structure our underlying architecture for the future at the same time? The argument is an easy one to make, in particular when considering that our existing infrastructure is reaching end of life for support and we have fallen significantly behind the latest current release (by multiple versions). This must be addressed.

An improvement in our overall speed continues our competitive advantage as we already enable businesses to complete processes in a faster manner than other marketing automation platforms today. This furthers our lead in this area.

Flexibility
The second area where a Mautic rewrite gives us a business benefit lies in the flexibility of the platform. We already built Mautic in an open source way that allows businesses to create a marketing automation workflow suiting their unique business needs, but that was always step one in a multi-step plan. The next step in that journey involves giving the business even more flexibility to manipulate and control their data.

Mautic 3 enables businesses to take advantage of existing data stores and other sources of truth beyond the Mautic database by separating frontend from backend code and functionality. Any database that incorporates the necessary API endpoints will be able to take advantage of the Mautic 3 frontend (and vice versa). I spoke specifically to this point in a separate blog post which I would encourage you to read if you’d like to know more.
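To illustrate what that separation implies, here is a minimal sketch of the kind of contract a decoupled frontend might depend on. The interface and method names are my own assumptions for illustration, not the actual Mautic 3 API:

```typescript
// Illustrative sketch only: a small contract a decoupled frontend could
// depend on. Any backend that fulfills it (the Mautic database, an
// external CRM, a bespoke warehouse) could sit behind the same UI.

interface Contact {
  id: string;
  email: string;
}

interface ContactBackend {
  getContact(id: string): Promise<Contact>;
  saveContact(contact: Contact): Promise<Contact>;
}

// One possible implementation: a thin HTTP client over REST endpoints.
class HttpContactBackend implements ContactBackend {
  constructor(private baseUrl: string) {}

  async getContact(id: string): Promise<Contact> {
    const res = await fetch(`${this.baseUrl}/contacts/${id}`);
    if (!res.ok) throw new Error(`Backend returned ${res.status}`);
    return (await res.json()) as Contact;
  }

  async saveContact(contact: Contact): Promise<Contact> {
    const res = await fetch(`${this.baseUrl}/contacts`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(contact),
    });
    if (!res.ok) throw new Error(`Backend returned ${res.status}`);
    return (await res.json()) as Contact;
  }
}
```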

Integrability
Mautic was created from the beginning with the concept of mixins – the addition of functionality to the core platform through a Mautic Marketplace which closely resembled a CMS extension directory. This was a fantastic first version and demonstrated that the desire and interest from a community and business perspective clearly existed. With the launch of the new Mautic.org website at the beginning of 2018 we saw a massive uptick in the number of integrations and their downloads.

Businesses wanted this ability to integrate. We wanted to continue in this direction with an overhaul to the existing mixin (plugin) infrastructure and architecture in Mautic 2. But this is still only a step in the journey. Extraction of the existing mixins from core, with unique repositories linked to each mixin, allowed for faster development and release of individual integrations. Mautic 3 pushes the software even further in this regard. This is done by doubling down on the API integration layer and the manner in which these mixins talk to the backend (and frontend) of the platform.

I’ll touch on this point in greater detail below as this serves to become a very unique selling point for Mautic and a massive differentiator for our platform in the market. Let’s look at that next as we discuss defensibility.

Defensibility
One of the most challenging obstacles in any company (or community) is understanding and identifying the manner in which the software being created is defensible.

Side note: When I use the term defensible I am referring to our ability as a community to offer something unique that our competitors will never in reality be able to achieve or offer to the same extent as we can.

Understanding what makes a product defensible is a challenge in and of itself and often is very difficult to do in the earliest days of a community. Therefore Mautic 3 is the opportunity where we can begin to apply our learnings in the marketing automation space and begin to clearly define those areas and functionalities where we are unique and defensible.

And this is where the excitement builds for me. This is the culmination of our learning, our open source core, and our ability to push the marketing technology landscape further. Our defensibility is found tucked into many of these business benefits I’ve shared already and will share in the next two points. The underlying mechanism and ability for our open source platform to be split and used interchangeably either from a frontend UI or as a backend API gives us the unique and defensible ability to be the first ever truly open source, API driven, suite of marketing automation micro services.

I know you’re probably very interested in hearing more about this but I’m going to simply say, you’ll have to wait for that specific blog post if you’re interested in digging in. I promise you I’ll publish it soon, because when something is this exciting I have a very hard time keeping quiet for long.

Extensibility
The concept of extensibility as different from integrability is nuanced and tenuous at points. But I would suggest the differentiation is clear enough to allow extensibility to be a separate business benefit. Just as integrability allows the software to work seamlessly with other tools “plugged into it” the idea of extensibility allows for the core functionality of the Mautic platform to be extended to additional areas and implementations.

This underscores the notion that an open source API driven marketing automation micro services platform can do far more than any monolithic platform ever could and allows for the functionality of the platform to extend far beyond the limited reach of existing tools.

Extending the platform requires an API first approach as recommended for Mautic 3. This level of abstraction provides the tools and system interoperability necessary for this business benefit to be properly realized.

Stability
The final point I’d suggest as a business benefit for moving to Mautic 3 is a tricky one, particularly because it requires more than simply creating new software. For Mautic 3 to be substantially superior to the existing latest release of Mautic and provide additional business value as a superior, stable product, several fundamental truths must hold. First, Mautic 3 can’t be merely “new code” – it must be tested code. By this I mean that while the code may be new, this does not require it to be either untested or unstable.

New Mautic code will not be merged into Mautic 3 until it has the appropriate unit test coverage and functional tests. Otherwise we run the risk of repeating the same flaws and bugs as Mautic 2: very fast releases of new features that outpace the number of fixes provided for reported bugs.
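As a trivial illustration of that standard (in a Jest-style TypeScript syntax purely for the shape of the idea; this is not actual Mautic code), every new behavior ships with a test that pins it down before it is merged:

```typescript
// Shape of the rule, not actual Mautic code: new behavior ships with a
// test that pins it down before it is merged.
import { test, expect } from "@jest/globals";

function normalizeEmail(email: string): string {
  return email.trim().toLowerCase();
}

test("normalizeEmail lowercases and trims its input", () => {
  expect(normalizeEmail("  Ada@Example.COM ")).toBe("ada@example.com");
});
```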

Therefore, in a release of Mautic 3 we can create a solid, and fully-tested platform capable of being improved upon as needed without fear of the potential to break something unrelated with each new feature release. Overall the stability of the platform is massively improved, the documentation excels, and the usability demonstrates an excellent product. (Read this post I shared recently on the topics of stability, why I think it’s important and what benefits it will provide us.)

 

May 6, 2018
Growing with GitHub and GraphQL

I wrote yesterday about Learning Something New and thought that a great follow-up would be to lead by example, so today’s very quick post shares what I learned!

Picking a problem

I’ll start with the problem. Mautic is growing at a tremendous pace and has a great foundation. (I laugh as I write that because doesn’t that sound like the most perfect problem to have?) But those words are the “pretty” way to say something much more real and more raw. Mautic is growing so quickly that it becomes a massive chore to keep things organized and focused. (And that’s a very real problem, for quite a few reasons.) In addition, a great foundation suggests longevity. The Mautic Community is getting some history.

Fast Growth
Fast growth comes with all types of growing pains, and without a state of constant and continual focus things will get lost, or broken, or simply done ineffectively. Yep, growth has its downsides.

Longevity
Longevity comes with the implications of technology problems. And this can mean a variety of things: outdated technologies, or non-standard implementations, or just flat-out missing software. A history and a “way of doing things” can be problematic for the future.

Okay, so now we’re on the same page (at least a little I hope) about a few of the struggles that a growing and solidly built community encounters. As I wrote the article yesterday these are some of the things also floating around in the back of my mind. So I decided to see what I might be able to learn to help with those problems.

Sorting out a solution

I fully recognize I’m not going to solve all the things in a single Saturday learning exercise. Nor am I going to be able to learn everything on a topic that I need to learn for my own personal edification, and since I hate feeling like a failure I wanted to pick something manageable. It didn’t take me long to find something to settle on: metrics.

For those who know me, I love facts, snippet-style facts, and specifically numbers. Therefore, an easy dashboard for viewing statistics about the Mautic community and software had obvious appeal. I am also particular about design and the user interface/user experience. And so I set my sights on my target for my project: build a dashboard view for Mautic metrics made available on GitHub.

And so I started by creating a plan. Here’s the very, very simplified version of what I scribbled down:

  1. Understand GitHub and what it provides
  2. Pick a technology for consuming GitHub data
  3. Display that data beautifully.

Wow, that sounds so incredibly simple. (I thought) I’ll be done with this in 30 minutes! (I was wrong, but that’s another blog post on project estimating probably.) Hindsight is 20-20 and truly ignorance is bliss; all of which is a very good thing because I felt confident and unstoppable (an important way to begin any project). And so armed with a problem, a solution and a plan of attack I got started.

Grokking GitHub

I have a secret… for some rather undefinable reason, I don’t like the word grok (definition here). Regardless, it fits here so I’ll use it. I wanted to get to know how GitHub stored and shared the data from the repositories I was keeping with them. I was fairly familiar with GitHub already due to the amount of time I spent on the platform, but didn’t really know much detail about what information GitHub made available for reading programmatically.

I knew they had an API so I started my journey (and eventually my code) from this source. I would later come to realize something else (as the title suggests), but this is the first example of where, for my personal knowledge set, I started with what I knew before evaluating their bleeding-edge offerings. Remember – the goal with bleeding edge is to move fast and break things. So I set out deep-diving into their REST API endpoints and beginning to figure out what data I wanted to retrieve and eventually display.
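For instance, a single documented REST endpoint already exposes most of the headline numbers a dashboard needs (the repository name in the usage note is just an example):

```typescript
// Reading basic repository stats from GitHub's documented REST endpoint
// GET /repos/{owner}/{repo}. Works unauthenticated, within rate limits.

async function fetchRepoStats(owner: string, repo: string) {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`, {
    headers: { Accept: "application/vnd.github+json" },
  });
  if (!res.ok) {
    throw new Error(`GitHub API returned ${res.status}`);
  }
  const data = await res.json();
  return {
    stars: data.stargazers_count,
    forks: data.forks_count,
    openIssues: data.open_issues_count,
  };
}

// Usage: fetchRepoStats("mautic", "mautic").then(console.log);
```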

Designing the Data (The Tech)

Designing the data involved deciding how to store the information I retrieved. I wanted real-time information and I believed most everything could either be pulled directly or with very few programmatic modifications. As a result I decided to go database-less and use a true API backend (in this case GitHub) as my only datasource.

This meant I could look at technology stacks that were more javascript-centric and frontend focused. Now, truth be told, in this moment I wasn’t going to fall into the trap of losing my day analyzing frameworks. I’ve been having these types of discussions a lot lately as it pertains to developing Mautic 3 and headless marketing automation, and I knew I’d lose my entire day if I went down that path. So, I continued on by picking an easy one (React) and moving on. (I would later add in different packages that would be new and different for me and force me to “learn something new” in this world as well.)

At this point I was building a very simple React app that interacted with GitHub’s REST API and would then display the information in an easy-to-read, beautiful manner. Next slide please…

Displaying the Data (The Look)

Once I had a good idea what data might be available I set out to figure out a way to lay it out and make it beautiful… while still being highly relevant and meaningful. There are thousands of resources available for frontend design, so this part of the process is of course highly subjective, and I chose to create something that appealed to my own tastes. As with several other spots in this project, I had the advantage of taking creative liberty.

This point is truthfully the one where I recognize my own tendencies to get lost completely. I know the look in my head and will take as long as necessary to get there … pixel by painstaking pixel.

Putting the Pieces Together

All the pieces were in place; it was time to implement. I created code, designed pages, and built my proof-of-concept app for what I wanted it to do. And I was pleased with it, but ran into roadblocks. (Not surprising.) The first roadblock to overcome was GitHub’s rate limiting of API calls. I was working with a React app on my local machine that would hot-load any changes to my local site every time I saved a file. Every time it hot-loaded the page it would re-issue the API calls. Thus (as you can imagine) I very quickly hit the non-authenticated API endpoint limits.

First Challenge: Implement a more advanced API call that included a personalized authentication token.
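Unauthenticated REST calls are limited to roughly 60 requests per hour; adding a personal access token raises that limit dramatically. The fix is a small one, something like this sketch (the token is read from the environment, never hard-coded):

```typescript
// Adding a personal access token to raise the GitHub API rate limit.
// The token is read from the environment; never hard-code it.

async function fetchWithToken(url: string) {
  const token = process.env.GITHUB_TOKEN;
  const res = await fetch(url, {
    headers: {
      Accept: "application/vnd.github+json",
      Authorization: `Bearer ${token}`,
    },
  });
  if (!res.ok) {
    throw new Error(`GitHub API returned ${res.status}`);
  }
  return res.json();
}
```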

After that was resolved I continued on my quest for data supremacy and the all-knowing snapshot of our Github repository. Things continued along nicely but I found that I was retrieving far more data in some instances than I needed and in other cases I was simply unable to pull out the information I needed. I was getting frustrated.

Second Challenge: Data was incomplete in some instances and excessive in others, and was not allowing me to do what I wanted.

So I walked away. That’s right, pushed my chair back, went for a stroll, cleared my head. And came back with a fresh outlook. I knew the end result I wanted so I started to step back and re-think my thinking about how I was building things out…and that’s when I decided to explore GitHub’s GraphQL implementation.

And here, this is where I had to give up my own comfort of a very familiar REST API and look at doing something different. And so I began to break things. I quickly commented out all of my REST calls and began building out GraphQL calls instead.

Pro Tip: I always start with a soft delete whenever possible so as to be able to use my knowledge again later should it prove to be helpful.

Third Challenge: Learning GitHub’s GraphQL implementation

That challenge reads as very short, but let me tell you, it packs a punch. It took me some time to implement, partly because every software product is different, and thus their implementation of something rather standard (GraphQL) still involves understanding all the data available and the manner by which you navigate their structure to retrieve it. GitHub’s documentation was incredibly helpful in this area.
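To give a flavor of the difference, here is roughly what the same repository stats look like as a single GraphQL request: one round trip, and only the fields you actually ask for come back:

```typescript
// Fetching repository stats through GitHub's GraphQL API (v4).
// One request returns exactly the fields requested, nothing more.

const query = `
  query ($owner: String!, $name: String!) {
    repository(owner: $owner, name: $name) {
      stargazers { totalCount }
      forks { totalCount }
      issues(states: OPEN) { totalCount }
    }
  }
`;

async function fetchRepoStatsGraphQL(owner: string, name: string) {
  const res = await fetch("https://api.github.com/graphql", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query, variables: { owner, name } }),
  });
  if (!res.ok) {
    throw new Error(`GitHub GraphQL returned ${res.status}`);
  }
  const { data } = await res.json();
  return data.repository;
}
```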

Bonus: Documentation will make or break your product. As much as you like to think your product is intuitive, it’s only intuitive to you. Good documentation wins wars.

I’ll save you a significant amount of time at this point and fast forward to my current status in this “quick” creation and my “learning something new” experiment.

Displaying the Data (The Product)

I’m excited to share with you this screenshot of what I built. It’s not complete…quite yet. There are still improvements to be made and I certainly want to explore ways to continue to optimize performance. I’ll be putting the actual site up for everyone to use in the coming days as a contribution to the community. So keep your eyes peeled for that announcement.

[Screenshot: Mautic GitHub metrics dashboard]

As I said before, there’s certainly more to add and even as I share this I am thinking of improvements I want to make.

It’s important though to achieve small victories. Find the win. I think this was a successful Saturday and it certainly forced me to learn something new. Finally, I debated back and forth about including the various sources, websites, repositories, and example code snippets that I found useful along the way, but decided against this due to the sheer volume of links that would involve. Not to mention the many, many dead ends and wrong examples I followed as well, which might be more difficult to suss out of anything I were to share. If however, you are interested in knowing more, leave a comment and I’ll be happy to answer. Oh, and there’s a bit of an Easter egg in that screenshot too. But I’ll leave you to figure that one out.

I’m off to enjoy the rest of my weekend and I look forward to seeing what you create as you continue to grow and become better. Don’t be afraid to learn something new.

API First Marketing Automation

May 2, 2018
Headless Marketing Automation

Yesterday I released a blog post entitled Looking Ahead to Mautic 3. That post went into great detail on why I believe Mautic 3 should be considered next on our product roadmap, and outlined the problems (as well as some solutions) that we could address with this next release. One of the features I shared received a few more questions than the others, so I think it deserves a bit of specialized attention.

An API First Headless Application

First of all, can we all admit that’s a mouthful to say? Let’s break it down to make it a bit easier to understand, and then dig into what it means and why I believe it’s a valuable step for Mautic’s future.

API First implies that every function of Mautic, every call to the database, and every interaction has to be “call driven”. This decouples the front-end user experience from the data layer (the API). It also means that the only way the user interface (design, page layout, display elements) interacts with the data is through a series of API calls. These calls are the glue that holds the data together. API first means the system has been created so that the API is the only way these things happen, and every API response is formatted accordingly.
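
Put concretely, in an API-first system Mautic’s own UI fetches data the same way any third-party integration would. A hypothetical exchange (the endpoint shape mirrors Mautic’s documented REST API, but the exact fields here are illustrative):

```
GET /api/contacts/42
Authorization: Bearer <access token>

HTTP/1.1 200 OK
Content-Type: application/json

{
  "contact": {
    "id": 42,
    "firstname": "Jane",
    "lastname": "Doe",
    "points": 15
  }
}
```

The admin interface then becomes just one consumer of that same response, with no privileged back door into the data.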

Headless: This concept is a funny one to discuss in software and applications. We’re not getting into the Ichabod Crane story /(though admittedly, for some reason, a character on a horse holding a pumpkin is inevitably the first thing that comes to my mind)/. In the software universe the concept of headless means something quite different. Here’s a definition:

…the front-end is a stand-alone piece of software, which through API communicates with a back-end. Both parts operate separately from each other, and can even be placed on separate servers, creating a minimum version of a multi-server architecture. The bridge between both parts is the API client. The endpoints of the API are connected to each other….
Headless software – Wikipedia

Earlier on that same page, the first sentence distills it down even further: headless software is software capable of working on a device without a graphical user interface (Wikipedia).

By these definitions we see that headless makes sense particularly when discussing things such as API first.

Now, let’s take that thinking and put it into more of a practical application. Why is a headless marketing automation platform useful and desirable? Why should Mautic consider this something worth undertaking in the next major release of our software? Here are my three main points to justify such a task.

Flexibility

In my opinion, the first reason to consider undertaking a task of this size is based on the concept of improving our flexibility as a platform. If our goal is to be “open” (more on that later) then the best way we can do that is by having a platform that is flexible.

Flexibility, to me, means continuing the great work we started, where a business is able to use the software in the way that is best for that business (rather than the way 90% of other software operates, where the business conforms to the software). We want to give people the ability and the flexibility to be in complete control of their information, their data, and their software. Software flexibility comes in a variety of forms; in Mautic we’ve considered our platform flexible from the very beginning. Custom fields, highly customizable and configurable campaigns, and the ability to create software practices that match a particular business have been part of the product from the start.

The next logical step in this effort involves looking deeper at other areas where we can implement more flexibility. Separating the functional layer from the user interface allows just that. A platform where you can consume the data from any interface you desire means you have a marketing automation platform prepared for the future. Your data, made available in any manner you need. API-first, headless marketing automation gives you the power of marketing automation in any visual end product you desire.

Sustainability

The second reason I believe we should focus on a headless approach to marketing automation is future sustainability. I don’t necessarily mean the sustainability of Mautic, but more importantly the sustainability of your data. If you are locked into a single user interface you’ll find yourself duplicating data, moving between different databases, and potentially losing information. You’ll also be tied to a narrower focus and implementation strategy for your marketing automation, because you’ll only be able to use Mautic in the manner envisioned by the Mautic community and its developers.

While this isn’t necessarily a bad thing (we’ve got a pretty good roadmap and vision for where marketing automation should be), I believe the ability for a business to use its data in multiple outlets gives a sense of sustainability to the database, and security in knowing the functional aspects of the software can be implemented in a variety of ways. You move from a single, standalone marketing automation platform to a situation where your data (and your marketing functionality) can be consumed anywhere, by any other service or device.

Openness

The final reason I believe a headless marketing automation platform is beneficial is for the sake of being more open. Mautic is built on open source. We are steeped in the knowledge that our code is readily available for anyone to review, to use, and to improve. This means that every function is understood (or could be), and that every action the software performs is easy to observe. If we continue this line of thinking, it stands to reason that in much the same way the data and the output from those functions should be easy to view, to use, and to improve. By extracting the user interface from the software and making the underlying infrastructure (the API) available to be consumed by other sources, we make Mautic more open.

No other marketing automation platform gives you this API-first, headless ability. You are essentially “locked in” to their user interface and their experience. (And we don’t even need to start talking about the limited API abilities of marketing automation platforms in general.) Closed marketing automation constricts and restricts your abilities as a marketer. You are forced to understand their interface, and to only view your data within the bounds of what they believe is marketing automation and how they believe you should access your information.

Mautic has always sought to do more, to be more. To provide you access to everything — after all, it belongs to you. Shouldn’t it be able to be used any way you want?


For these reasons I believe it is in Mautic’s best interests, and key to its future success, to become API-first and truly headless. I hope this shares my thinking with a bit more clarity, and if you were unsure before what headless meant, you now have a good understanding of the topic.

If you have ideas or other ways in which a headless marketing automation platform can change the landscape and improve marketing, I would love to hear them. We’re building this together: our robust, global community of marketers and developers works together to create the Mautic software, and we have the power to envision and create the future. We are changing the landscape and we will continue to do so. It’s an exciting time to be in Mautic.

Special thanks to Don Gilbert for his help with this post.

May 1, 2018
Looking Ahead to Mautic 3

Mautic 1.0 was released out of beta on March 10, 2015. Then Mautic 2.0 was officially released on July 5, 2016. And that’s where we have continued to make improvements. This means we have been improving and iterating within the 2.x release for almost 2 years. This carries both positive and negative implications. I’ll start with the positive.

This duration of a major release demonstrates the significant improvement to overall platform stability we have seen. It also speaks to the flexibility of the existing platform to be improved and built on top of, without major breaking changes needing to be introduced.

But there are also negatives resulting from a lengthy release cycle like this. We’re building software for the internet: the rate of change of software on the internet is growing exponentially, the technology is changing, and the landscape is shifting drastically. By remaining in a single major version we limit our ability to take advantage of those technological advances (if we are unable to make those changes without breaking backwards compatibility).

I’ve discussed the versioning for Mautic previously if you want to review that information, but the tl;dr is we use semantic versioning.

For these reasons the time has come to begin exploring the benefits (and potential downsides) to beginning development of a new Mautic 3.0 release.

Current challenges

The first thing we need to identify is why we would want to move forward with a Mautic 3.0 release. We don’t take these large transitions lightly; there must be sufficient difficulties to overcome and/or new features made available by such a move. To that end, the following are some of the areas where a 3.0 release may prove beneficial to the Mautic product.

Symfony Versioning

This is possibly the greatest reason for beginning our discussion around a Mautic 3.0 release. Currently, Mautic requires Symfony 2.8 and only works within the 2.x series. This series of Symfony reaches end of support for bug fixes in November 2018. Meanwhile, Symfony’s current LTS version is 3.4.9 and the current released version is 4.0.9. This is a very large problem that we need to resolve. A migration from the current Symfony requirement to even the long-term support version (3.4) requires a large overhaul of our codebase and framework due to a number of deprecated methods. (I can elaborate on this in more detail in a separate post should it be of interest.)

We’re learning as a community through this process, and some of the design/architecture decisions we made in the early days of Mautic have been improved upon and reconciled so as to not lock us in to specific releases of a framework in the future. Regardless, this upgrade plays heavily into the remaining evaluation of an imminent restructuring and release of a Mautic 3.0 version, and as such opens the door for further discussion around framework implementation.

Frontend

The first issue that Mautic 3.0 is capable of resolving involves the front-end interface. Mautic’s interface has remained relatively consistent, even through the transition from the Mautic 1 to the Mautic 2 series. But as mentioned, the existing interface has been in place for nearly 3 years now. This points to the success of the clean approach we took when designing the initial Mautic interface; however, at this point it’s time to consider an update, or facelift, to the user interface.

The frontend modifications are more than just surface level, though. Currently, Mautic 2.x frontend code is deeply integrated throughout the codebase. While we attempted to isolate the code to the /views folder within each bundle, we have inevitably had HTML generated and output from other locations as well, and this does not lend itself to a clear separation of frontend and backend. Only with a Mautic 3.0 can we resolve this intermingling of views.

API

Mautic’s API is fairly strong, and absolutely open and flexible – you can review it here: https://developer.mautic.org. But as mentioned in the first item above, Mautic is not truly architected as API first. It pains me to say this because our API is so strong, but it’s not complete. There is more we can do. We need to take our API to the next level and make it truly headless.

The modifications necessary to our API to enable this would also require changes to many of the functions and classes within Mautic. Touching this many areas of the system is risky and poses additional potential problems, which are best mitigated during a major version release.

Database ORM

One of the greatest issues we’ve faced with Mautic 1.x and 2.x has been implementing at scale. I’ll address speed in particular in a later point, but there are two main contributing factors to our latency. Our current database ORM structure is one of them. Please understand what I’m suggesting: I don’t believe the ORM is necessarily the problem (though there have been open discussions about ORM implementations causing speed issues in other situations). Rather, I am referring to our specific implementation of Doctrine ORM. Many places suggest that an ORM should be used for smaller projects with smaller amounts of data, or for jumpstarting development as scaffolding before moving on to a full-fledged data schema.

The three greatest problems with ORM-based development are as follows. First, performance degradation due to metadata, DQL, and entity processing (this adds far greater overhead than simply fetching the data). An ORM also often sacrifices the native performance features of a specific database platform because of the way it “forces” a one-size-fits-all approach.

Second, Doctrine does a lot of things behind the scenes and without any control by the developer. In these instances the ORM is treated as a bit of a “black box” where functions go in and data comes out, with little to no insight into how the actual queries are structured or how they can be refined. Hours upon hours are quickly lost attempting to debug this data and work out what’s happening “inside the box”.

The third point is closely related to the first: an ORM is quite limiting from a development perspective. You are unable to properly optimize your database platform for your specific use case, so all queries are forced to be “basic” while, at the same time, associations become overly complex due to the way the ORM manages relationships.
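
To make the trade-off concrete, here is the same lookup sketched both ways (illustrative only; the entity alias and column names approximate Mautic’s rather than quote them, and the plain-SQL version skips every hydration step the DQL version performs):

```php
<?php
// Through the ORM: DQL is parsed, translated to SQL, and the results are
// hydrated into fully managed entity objects. Convenient, but every step
// adds overhead and hides the final SQL from the developer.
$contacts = $entityManager
    ->createQuery('SELECT c FROM MauticLeadBundle:Lead c WHERE c.points > :min')
    ->setParameter('min', 10)
    ->getResult();

// Through the DBAL connection: the exact SQL is visible and tunable, and
// the result is a plain array with no entity machinery in between.
$contacts = $entityManager->getConnection()
    ->executeQuery('SELECT * FROM leads WHERE points > ?', [10])
    ->fetchAll();
```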

Entity Hydration

The second factor that has greatly impacted our speed relates to our entity hydration. The method by which we make our queries, hydrate the results, and return them is often bloated and does more than necessary. As a result of this overkill we experience slow page loads. Evaluating our use of entity hydration suggests we are doing far more than we should be, and this drastically affects our API call query time.

This affects our API call time because of the way the entities are hydrated. Let me explain: when we fetch and format an API payload, we create DQL that Doctrine translates into SQL; Doctrine then hydrates entity objects using \Reflection, and we pass those through a serializer that reverse-engineers the entities into arrays and removes the elements we don’t want. This process also involves going back into nested associations and removing the unnecessary items there as well. Finally, we package up the outcome, encode it as JSON, and return it. (Can you say overworked?)

This same process also goes into our forms and the majority of our UI output. Most of the time we only desire the data, but unfortunately we are returning full objects of data that’s been converted a half-dozen times into different PHP objects and arrays before it ever reaches the UI.
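A small sketch of how much of that work is optional, using Doctrine’s built-in hydration modes (the entity alias is illustrative):

```php
<?php
$query = $entityManager->createQuery('SELECT c FROM MauticLeadBundle:Lead c');

// Object hydration: builds full, managed entity objects via reflection,
// which we then serialize straight back into arrays for the JSON payload.
$objects = $query->getResult();

// Array hydration: skips entity construction entirely and returns plain
// PHP arrays, which is often all an API response or a view actually needs.
$arrays = $query->getArrayResult();
```

Moving read-heavy paths toward array hydration (or plain DBAL queries) is one of the simpler levers available here.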

Integrations

Our integrations are one area that understandably needs improvement in Mautic (whether in 2.x or 3.0). This is a result of our incredibly fast-paced growth and our attempt to stay ahead of the curve. There’s no denying that as a result we sacrificed some of our “code beauty”. If you haven’t explored the integration code in Mautic, I’d recommend taking a look. What has happened is understandable and yet inexcusable: we’ve added far too much to the core package and not properly abstracted the integrations as they should be. This failure to decouple properly leads to several problems.

First, this makes plugin upgrades inextricably linked to a Mautic release. At no point are we free to improve upon and release new versions of a particular plugin without waiting until the next version of Mautic is released. There’s no need to belabor this point; the problem is blatantly obvious.

Second, the current integration situation adds bloat to core. There is no reason to bundle some plugins with Mautic core while requiring other plugins to live in a plugin repository (or Mautic Marketplace) to be downloaded and installed by the user. All plugins should function the same way, reducing the overall Mautic footprint and providing a clear path for installing desired plugins without extra baggage from unused or unwanted integrations.

While there is a path where integrations can be improved upon iteratively within the 2.x series, this is yet another factor to be weighed when exploring the potential of introducing a 3.0 release.

Speed

One final point to address when discussing existing challenges relates to overall platform speed. I think it fitting to close this section with this point because ultimately it plays a major role in the roadmap for Mautic going forward. Currently, Mautic performs quite well in a variety of environments.

Mautic has been tooled very well for small to medium-sized databases, and while the functionality serves every business equally, some limitations began to emerge when working with large-scale database implementations. This has led to a slowdown of various functions within Mautic and requires workarounds to improve.

Secondly, due to the entity hydration and Doctrine ORM implementations done within Mautic (partly to speed up release timing and create software faster) the overall architecture suffered. This isn’t immediately noticeable but does come to light with larger datasets and more intensive query objects (e.g. within campaigns or when creating complex segments).

Lastly, all of the above speed-related issues roll up into a degraded user experience. The goal has always been 300ms page loads within Mautic. While this may seem aspirational, it is not necessarily impossible. Rethinking the underlying architecture gives us the opportunity to explore ways to achieve these aggressive goals and deliver an outstanding user experience.

Potential solutions

Now that we’ve highlighted several of the challenges we’re facing in Mautic, it’s time to explore how we solve them. This involves keeping an open mind and looking at every possible solution path. Some of these may be far-fetched, some may be irrelevant, and some may seem overwhelming. The goal in this section of the document is to review all of them with an open-minded approach.

I’m going to outline the four ways I see this being addressed and hope this serves as the beginning for further discussion. It’s also important to keep in mind that these solutions are not completely mutually exclusive. There is the potential for a combination of these solutions to be implemented for the final desired result.

There are both pros and cons to each approach and rather than attempting to highlight those options in this post I will leave that for either a future post or for group discussion. Instead I’ll merely outline what each solution entails so we have a better understanding of what each represents.

Re-write on existing framework

The first option we have is to rewrite on the existing framework. At first glance this sounds like the most logical and least stressful of the solutions for Mautic 3. It would involve a significant review of the existing code and a harsh look at what should be re-written or even removed. At this time there’s no definite answer on the amount of work involved in a framework re-write on Symfony; this will need to be explored to better understand the level of effort involved.

Selecting a new framework

A second consideration at a major release point like this is to re-evaluate the framework we have used so far and determine whether it remains the best framework moving forward. This also involves a great deal of work (obviously), as the code would need to be re-written. This is precisely why I suggested you keep an open mind at the beginning of this section. We need to objectively evaluate the best solution with all things considered. We must step back from looking at just the code and consider, in its entirety, everything that would be involved in something of this scale.

Database architecture

Another area where we must evaluate current Mautic 2.x versus Mautic 3.0 is the database architecture. Our existing structure has served us well but if we are exploring the undertaking of a 3.0 series we are defining a release where we have the opportunity to make significant improvements and/or adjustments to the database architecture as well.

Currently our table schema has presented a few (though minor) problems which may be well served by a refresh. This would give us the opportunity to improve indexing, table columns, and even the overall structure of the data. (Need an example? We currently refer to contacts, yet the database table is called leads. While this may seem minor, it is a remnant of a speedy release earlier in the 2.x series that should be rectified.)
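
The rename itself is mechanical; the real work is updating the foreign keys, indexes, and every code reference that assumes the old name. A sketch in MySQL:

```sql
-- Hypothetical cleanup step: align the table name with the product's
-- own vocabulary. Dependent keys and code references need updating too.
ALTER TABLE leads RENAME TO contacts;
```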

API first architecture (headless)

The last item I recommend we consider as we explore this stage in our development cycle is a return to the topic of APIs. I mentioned this previously in the problem definitions section. We must reconfigure our existing structure and modify our existing product to be API first. This means we need to evaluate every endpoint, identify which are missing, and extract end-user code from the output (i.e. all responses should be JSON strings).

Mautic 3.0 is the first major opportunity we have to make this improvement. Regardless of the framework selected (after evaluation which we will discuss next) this is the time we should make the improvements to our API. We must make this a priority in order to ensure that Mautic is properly headless. (Interested in why headless is important? Let me know and I can make a separate post describing the value.)

Evaluation process

Next comes the step where I need your feedback. I’m always looking for end-user feedback, but here I would especially like technical feedback on specific solution outcomes. This discussion has begun in the core channel of our Mautic Slack; I would encourage you to join the discussion there should you be interested. While opinions are welcome, those backed by specific data and use cases based on historical experience will be given greater credence.

Let’s explore a few of the items to be handled by this evaluation process.

Benchmarking

Whenever there is discussion about switching frameworks there is usually an instant and visceral response. This response comes from a good place but is often not backed by correct factual information. As a result, during the evaluation process, and in order to keep “feelings” out of the equation (as much as humanly possible), I want to make sure we back up our opinions with benchmarks and statistics (again, as much as humanly possible).

I recognize that the best benchmark would involve our own software written in the different frameworks, with all other factors kept as controls, in order to provide a clean comparison. I also recognize this is highly unlikely and presents numerous challenges, so we must do our best to keep those other factors from skewing the result. And that still doesn’t account for the near-impossible undertaking of writing the same code on multiple frameworks simply to extract benchmark data.

Given this, it seems appropriate to find existing benchmarks for other platforms built on each of these frameworks at various degrees of scale, and to use those as a baseline for comparison.

Specific use case evaluation

Once we have some basic benchmarks we can begin to explore specific use cases and implementations. This is where we take the best of the best and begin to build out a plan for how the various pieces might work together. Again, the goal is a non-subjective approach to the information and presenting varying use cases for evaluation.

This should not be extremely time-intensive, but rather a preliminary step prior to the next phase, where a proof of concept is mocked up.

Proof of concept

This step is often where I get most excited myself. Sometimes others may see me arriving at this step and not fully realize I’ve worked through all of the previous steps already. I trust in this particular instance we will make this journey together and share in the excitement of a proof of concept.

As a word of caution, the proof of concept is not a final or even functional application. We simply want to test the hypotheses and theories we have drawn from the research, benchmarking, and use case evaluations. This is the point where we create code. We build out an example of what it would entail to create Mautic 3 using the solution as defined.

There are several key things to look for in a proof of concept: code style, readability, implementation methods, and database architecture. The proof of concept should give us visibility into each of those areas, as well as a good understanding of the implications of the solution for page speed and API response times.

Subjective item scoring

The last part of the solutions exploration involves scoring the results from each of the identified solutions on a number of criteria. This will certainly be challenging for our community, based on the first word of the heading: subjective. It’s never an easy task (and an oft-avoided one) to rank outcomes where the answer is not a clear black-and-white, yes-or-no. Instead we have to consider all the potential benefits and detriments of each solution, and weight them according to their perceived merit and potential value.

There are a number of factors that contribute to the success of a solution and while I have highlighted the technical solutions first in this particular post there are others to be considered as well. I will be writing an additional post that will focus on the extraneous factors and how they affect the Mautic product either through a 3.0 release or implementing an update to the 2.x series.

Next steps

So, now that we have this outline of what we are looking to accomplish and evaluate from a code perspective with a potential Mautic 3.0 release, we need to focus on how we best accomplish these goals. Here are the first three steps I recommend we take as a community as we push forward with exploring Mautic 3.0.

Organize a team

First, we need to organize an evaluation team. This should primarily be a team with technical ability, as the majority of the items listed above are highly technical in nature. There will be a time and a place for the greater community to voice their input and opinions, and the subjective feedback of the community at large will be sought at that time. This initial team should be developer-centric given the tasks at hand.

Formulate an evaluation matrix

Once we’ve organized a team and carried out the steps for the potential solutions listed above, we can begin to draw some results and conclusions. The best way to do so is to prepare an evaluation matrix where we can properly identify the pros and cons of each recommended solution. This will help remove the subjectivity and allow us to focus on the best and most strategic paths forward.

When creating this matrix we will also consider additional items such as time to implementation and community involvement. In addition to picking the most technologically sophisticated solution we must also match that with the existing skills of our community and determine if we need to reach out to other communities for assistance as we seek to grow properly.

The evaluation matrix will not be judged at this point, nor a conclusion drawn; rather, it will be the culmination of the work done to date, distilled into a meaningful format that can be easily shared in the final step.

Prepare an RFC

The final step in this evaluation of the Mautic roadmap involves preparing an RFC for dissemination to the community. This is where we seek to get feedback, support and buy-in from everyone. We want to ensure that our community as a whole agrees with the decision made and more importantly agrees because they have received the proper factual information. This is where the evaluation matrix will offer a great deal of insight and information.

This will be a great milestone for the Mautic community as we continue to push the boundaries of marketing automation and the technology used in our software. We are capable and equipped to define the future of the marketing automation space, and this is our next big step in that direction. I hope you can sense the excitement I have, not only for the outcome but also for the journey as we grow. I look forward to seeing what comes next!

Special thanks to Alan Hartless for his feedback to this blog post

March 14, 2018
Introducing Saelos: A Personal Project

Some of you may have noticed that it’s been a little while since I posted a longer piece on my blog. That’s not because I haven’t wanted to but because of some other things I’ve been pouring every spare second into. Literally every spare second. We won’t get into a discussion on the topic of sleep habits (maybe I’ll come back to that one – it’s interesting) but suffice it to say, the time I’ve spent has been all-consuming.

But that’s what happens when I am passionate about an idea and want to see it developed. I lose myself in it. I can’t help but think that’s normal though right? Don’t you do the same thing when it’s something you’re incredibly excited about? Regardless, I’ve come up for air now and decided it’s worth taking a few minutes and letting you in on my personal project. This is just something I personally believe the world needs and a shift in a current status that I think can improve lives and business for everyone.

If you’ve never read the book What Do You Do With An Idea? you should stop right now and pick up a copy. It’s a children’s book so don’t fret – you can finish this one in a few minutes.

So, what do I start with? The solution? The problem? Oh, wait, I know. I should start with why. Let’s do this:

I’ve watched people keep lists my whole life. I’m a list maker. I love making lists of things and keeping track of how I’m doing as a result. So when computers came around, people wanted to create lists on them. Natural progression. As is always the case with software, one list became multiple lists. Then people dreamed up the fabulous idea of using those lists to show more information. What a beautiful thing. Now my list of a sentence could be an entire paragraph or more. What if I wanted to define fields and then allow other people to edit and update my lists? At a very high level, this concept of a list item gradually turned into the idea of a single record holding lots of data.

Okay, everyone with me so far? I’m going to jump ahead a few steps, so keep up; we have to start moving faster (I have a thing for speed). Companies began to create software to help with managing these records. They all started with this idea of building a list and adding more and more fields to each record. Display the record differently, use different names for the list items (or objects), then package it as a different software product. Rinse and repeat. This was the state of the world; then the internet came along and these companies all moved their products to the web. Same product, same thinking, but a different medium. (C’mon people, this is “the internet”.)

But the thinking that built these platforms was inherently the same (for the most part). One company even went so far as to attempt to use a “No Software” logo, which attempted to suggest a new paradigm shift in business work, but this was the same thinking, different platform. The software was never the problem. The thinking about how it was built and the implementation of how it functions is.

This problem of record management was something that I both heard about and experienced myself. A world of “apps” all performing the same CRUD tasks (Create, read, update and delete – Wikipedia). Then you could even build your own app on top of an app to add more of the same with different fields you wanted to track or different functionality you wanted to have. But the underlying system was faulty. And though the underlying code was constantly being added to and tweaked, it was the same framework. And I believe people need a change. I believe software today is fundamentally different from software of yesterday. Below are my fundamental principles about software.

Software should:

  1. Be modular not monolithic.
  2. Be extensible.
  3. Solve specific problems.
  4. Be active.
  5. Be open.

Those five fundamental beliefs shape the work I do and the projects I work on. I absolutely believe this. Let’s look at each very quickly.

Software should be modular and not monolithic.

My statement here is more than just the concept or idea of bolting more pieces onto a base. That’s the app tacked on to an app tacked on to an app idea which businesses today attempt to do. By modular I mean the core functionality should be able to be removed, replaced, rebuilt and improved upon.

An example of this lives in the Mautic platform. Mautic functions as an omni-channel marketing platform. Fancy words; simply put, Mautic lets you market across email, SMS, social, web, mobile, and more. The channels are fundamental to the software, but they are completely modular. Want to use SendGrid instead of SparkPost to send your email? No problem. Have a different ESP? Drop in your credentials and go. What about SMS: Team Twilio or Team Plivo? The choice is yours. Mautic is fundamentally modular in its approach.
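
As a sketch of what that modularity looks like in practice: swapping ESPs is a configuration change, not a code change. The parameter names below are illustrative of the shape of Mautic’s local configuration rather than an exact quote of it, and every value is a placeholder:

```php
<?php
// app/config/local.php (illustrative): point the mailer at a different
// ESP by swapping transport details and credentials.
return [
    'mailer_transport' => 'smtp',
    'mailer_host'      => 'smtp.sendgrid.net', // or your ESP of choice
    'mailer_port'      => 587,
    'mailer_user'      => 'apikey',
    'mailer_password'  => 'YOUR_API_KEY', // placeholder
];
```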

Software should be extensible

I believe the idea of extensibility is broken in many cases. The idea is commonly misconstrued through the proliferation of “app stores” within product companies. /(I’m not referring to Apple or Android, which provide applications to be run on an operating system.)/ Software being extensible means something quite similar to the first point: modular software can easily be extended to include additional functionality without losing its core purpose, and extended into different ecosystems (or apps) through the open exchange of data.

Software should solve specific problems

Too many software companies today try to be all things to all people. The world is not your target market. I believe in the idea that software should be accommodating and flexible but this doesn’t mean it’s a one-size-fits-all approach. Flexible means the software should fit your business rather than your business conforming to the software; but the problems to be solved are unchanged. This means the processes might vary, but the solutions remain the same.

Software should be active

I know this point sounds funny, but the majority of today’s software is passive. It responds to requests; it dumbly regurgitates what it has been given and spits out a mangled version of an answer… when asked a question. I call this passive software. In contrast, I believe software should be active. It should proactively assist me in performing a task or reaching a goal. (Today’s many software assistants are absolutely passive, but that’s another post.) Software should be helpful and active in enabling the user to move faster and do more, intelligently.

Software should be open

I left this one for last because it’s the foundation. It’s the belief on which everything else is built. I believe software should be open. There are countless studies, reports, and white papers on the reasons why, so I won’t sidetrack this discussion. I believe in open. Open software empowers people and enables the other points above.

Relationship Management

All of that brings me to a problem I face. I needed something to help me manage relationships. But everything that existed in the world was old, bloated, slow, inflexible, and closed. Yep, pretty much the exact opposite of everything I listed above. It’s frustrating. But solvable. I believed I could create a platform that managed relationships but was built on the principles I believed in. So I started working.

I wanted to create something blazingly fast (regardless of the number of people being managed). Something modular that could be easily extended, something that solved some very specific problems, something active in its interactions, and most importantly something open.

I’d like to introduce Saelos. An open platform built on the software principles I shared. The purpose: customer relationship management. That’s a tricky one to say because instantly I’m sure you conjured up one (or more) companies that offer a solution under this label. But Saelos is different. Very different. Because Saelos is something I’m calling active software. By this I mean that not only does it manage customer records differently but it also actively helps you maintain connections, build relationships, and accomplish your goals by actively assisting you. Task completion, recommended actions, process improvements, and intelligently created reminders are just a few ways that Saelos will do things for you.

Let me be clear: Saelos does far more than just incorporate another workflow builder and follow simple step-by-step procedural tasks created by a user. Saelos builds them for you, and then executes them. And informs you along the way, all the time enabling and empowering you to do more of what you should be doing with the right person at the time you should be doing it.

Possibly the best part of this entire project is that Saelos is built on the right foundation: open. This means Saelos can be downloaded, self-hosted, installed, configured, and improved upon by everyone.

I have so much more to share and I can’t wait to show you what’s been created. I hope to give early access to the project by the end of the month (Friday, March 30, 2018). I know it’s forbidden to set a date in software development… but I’m feeling pretty good about this.

Are you interested? Do you believe in the same fundamental principles as me? Would you like to experience something different in how you manage and interact with people? Sign up for early access here, and let me know. Oh, and subscribe to notifications on my blog to be on the insider’s list. I’ll be posting more information and (hopefully) screenshots in the days and weeks leading up to the release. It’s coming fast, I hope you’re as ready as I am.

March 5, 2018
Blockchain Bonanza or Bitcoin Bubble

I didn’t post anything in 2017 about the concept of blockchain and I’ve been diligent in not posting anything so far in 2018. Now I’m finally going to break my self-imposed silence. I’m choosing to do so now because it seems the initial craze has worn off a bit and things are finally starting to normalize. (At least that’s the impression I’ve gotten in recent days and weeks.) Sure, there’s still plenty of news and publicity surrounding the technology, and there’s the occasional doomsday post, but the rabid chatter among everyday individuals seems to have faded.

As a result of this decline I think it’s finally time to share some of my thoughts and opinions (not to stir things up again but because I believe my post now won’t be seen as my feeble attempt to jump on any bandwagon). The concept of blockchain technology is profoundly revolutionary to our world but you have to look far beyond the early beginnings of a cryptocurrency and the current proof of work mining efforts.

If you are not yet caught up on the topic, there are literally thousands of articles to help you. This one is a great read, but my personal favorite is this article: WTF is The Blockchain? – Hacker Noon. After you’ve read this (and hopefully others) then you should have a much better understanding about how the blockchain works and functions and maybe a hint about why it’s so important for the future. (And no, that’s not just hype talking).

Now, as is usually the case, anytime something new is announced you get the first rush of early adopters. In some situations those early adopters are quiet, excited enthusiasts doing fun things, exploring the limits of the newest frontier on their own and happily doing so. In other situations those early adopters see the potential in something and begin shouting their praise every way they can. When those situations happen the rest of the general population can’t help but notice and begin to pay attention. Again, nothing incredibly new or different here. But every once in a while in rare instances something else comes into play. Money.

That’s right, blockchain might have happily been created and begun to spread in the usual manner, but instead bitcoin was the primary vehicle by which the technology was propelled into the spotlight, and the money changed everything. Not instantly, but when it took off, it really took off.

But this post isn’t about the history of the blockchain or even the debate over bitcoin bubbles. Instead, as I began with, I am excited to talk about the future of blockchain and explore what the blockchain might be able to provide for different verticals besides cryptocurrencies. And no, I don’t consider CryptoKitties | Collect and breed digital cats! to be the full extent of the possibilities. Although I do admit to owning a few myself.

I truly believe in the fundamental concepts behind the blockchain. Maybe that’s partly because it seems to be the next generation of open source. We’ve seen the world gradually come to accept open source software as the new normal, and the studies are in: open source software is eating the world. Almost every major company and organization participates in and uses open source software in some manner in their business. And what I see in the blockchain suggests it will be the next generation of open source. Decentralization. No single source of controlled power. Democratically governed and available for everyone to participate in. Sound familiar? It does to me.

Open source and blockchain share a lot of the same principles (along with some of the same, familiar opposition). And I am excited to encourage and push the boundaries of the blockchain much in the same way as open source. We can do this by examining what ways blockchain can be used “outside the box”. If we look at blockchain applications (and there are many already) we can begin to see how versatile the platform is and how it can be used. But just like the saying goes:

”…it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” – Abraham Maslow

This is referred to as the law of the instrument, a common cognitive bias that is especially important to consider when exploring new technologies (like blockchain). Just because something is new doesn’t mean it’s right for everything. So while it is important to think outside the box, we must at the same time consider the most effective tool for a job. It’s more important to identify the job and then pick the right tool than to take a tool and try to find a job it will do.

This frames my thinking about blockchain and the future of software as I see it. I believe this is a powerful tool and will be the basis for many new innovations in the future. I also believe that with this tool we will be able to improve security, openness, transparency and trust in software systems. And finally, I believe that blockchain is a fantastic tool, but it should be treated as a tool and selected only when it is right for a particular “job”. We should push the boundaries. We should explore new technologies. We should do this thoughtfully and intentionally. Join me and let’s begin creating the software for tomorrow’s internet.

Interested in hearing more about what I’m thinking and working on? Let me know and I’ll write a follow up post with some greater detail; otherwise I’ll share it with you when it’s ready.

February 20, 2018
Cyborgs and AI

I am almost embarrassed to write this given how late I apparently am in reading this particular blog post, but for those who have beaten me to it, I beg your patience while I get a bit excited sharing my personal discoveries. Okay, with that said, let me dig in and get a bit more specific.

I recently stumbled across this article, Neuralink and the Brain’s Magical Future – Wait But Why and it’s done wonderful things for putting real words to some of the thoughts I have been entertaining regarding the future of AI and humanity. If you have a free 30 minutes (maybe an hour or two) then I can think of very few things more worth the investment of your time. Take a deep dive into this line of thinking and expand your horizons.

I can tell you that a few of the concepts here touch on topics that I have personally been very excited about and have begun discussing with those that work closest to me. I agree with the observations concerning the direction of our current technology and I also agree with the concept of how we successfully navigate the perceived dangers of AI. Bottom line: An integrated AI is where I place my hopes and intents for the future. And although there are a couple of areas in the post where I take a different line of thinking from those shared in this article there are many others where I agree. A fully integrated tertiary layer that improves upon our “output” will revolutionize our future. External vs Internal supplemental AI seems a moot point that society will need to reconcile in time.

If you’d like to read what I’m reading and are curious about my thoughts on where we go from here, this article gives some great insights to get you started. Read it and let’s talk. Read it and challenge your own thinking. Read it and challenge mine. As I shared in my short-form post, Opinions, this ability to form, express, and differ in our opinions is what will improve us all.

February 15, 2018
Exploring Serverless PHP

I love reading about cutting edge technology and exploring what will be coming next in tech. Most recently I have been reading everything I can about serverless architecture given the growing number of articles and discussions surrounding this trend.

Most recently I read this article, Rise of Functions as a Service: How PHP Set the “Serverless” Stage 20 Years Ago which very clearly discusses the changes in our technology even if it is several months old. I really liked the comparisons to the early days of PHP and how it relates to where we go from here.

I am eager to see how things like serverless architecture can be implemented in modern software applications like Mautic, or others, but continue to struggle with the fundamental disconnect between these FaaS platforms and a PHP-based software application. I am beginning some exploration in this regard through the use of some different connectors (like the one shared in the article above).

For anyone interested in learning with me (or showing me what they have already done): I have begun experimenting with this framework: Serverless Framework – Build applications on AWS Lambda, Google CloudFunctions, Azure Functions, AWS Flourish and more, and plan to update my blog with my progress as I explore this in greater detail. So far I’ve discovered several libraries that offer integrations and/or frameworks for PHP and have settled on GitHub – araines/serverless-php: PHP for AWS Lambda via Serverless Framework as a starting point (don’t hold me to it as I may change this later). I liked this one because it uses Symfony components, which Mautic already uses in core.
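
To make the shape of the experiment concrete, the essence of a function-as-a-service entry point is tiny. Here is a hypothetical handler for an API Gateway-style event (the real wiring of PHP onto Lambda is exactly what connector projects like the one above provide):

```php
<?php
// Hypothetical FaaS handler: the platform hands us an event payload and we
// return a response array. How the event reaches PHP (the Lambda runtime
// shim) is what the connector projects take care of.
function handler(array $event): array
{
    $name = $event['queryStringParameters']['name'] ?? 'world';

    return [
        'statusCode' => 200,
        'headers'    => ['Content-Type' => 'application/json'],
        'body'       => json_encode(['message' => "Hello, {$name}!"]),
    ];
}
```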

I don’t know if this will be something that we use with Mautic, but I am proud that we currently use the most well-recognized (and still considered cutting-edge) software with Kubernetes. There are always challenges when building out large-scale applications particularly when you want to balance contributing to an open source distributable platform and also create a world-class SaaS platform based on that same code. Kubernetes and Docker containers have given us the ability to do this and I’ve been incredibly pleased with the results so far. (If anyone is interested in hearing more about that I think it might make for an interesting topic for a future post).

For now, I’ll continue to explore how Mautic (and other PHP applications) might be able to take advantage of Functions as a Service frameworks to scale even faster.

February 10, 2018
Phone Screen Experiment

Recently the hype has been growing surrounding the removal of various social media apps from cell phones. Whether the action is due to social pressure, personal resolution or otherwise the outcome is the same – reduced social media usage. I’ve read story after story of people doing this either as an experiment or as an attempt to overhaul their time spent on social media.

In fact, most recently I was excited to hear that my good friend Dries Buytaert had started a blog series outlining his process of replacing his social media posts with more relevant and meaningful posts on his own blog. Personally, I think this is the best approach I’ve heard of so far. Studies have clearly shown that removing one social media app merely causes a corresponding rise in use of a different one. Dries’ approach carries many benefits: not only the decrease in social time spent, but perhaps the greater value lies in the increase in quality content he is now placing on his blog. I’m also a huge fan of the furtherance of an open-web mentality that comes as a result of his decision.

I removed the various social media apps from my own phone late last year and have found it to be an excellent decision. I’m spending more time on what I consider meaningful activity. But as I’ve progressed this year, I’ve continued to read blog posts (like this one) suggesting that the replacement principle is still at work in my phone usage, even in the absence of social media. And so I am going to continue on my own journey of exploration and, hopefully, self-improvement. Let me explain.

Many individuals like to share the home screen of their phone. It tells everyone what apps they deem most important and what apps they want to be able to access quickly. Here’s my current home screen.

That’s not a mistake. Not only is this my home screen (because I know some of you will think I’m cheating and using screens to the left or to the right) this is my only screen.

My current phone of choice is a Google Pixel 2 XL. I’ve been using the Pixel line for 2 years and love it. This means I’ve been using the Android OS for a while, and it allows this level of customization. Let me explain my thinking and the reason behind this change.

I should begin with the problem I wish to solve. Too many times I have found myself grabbing my cell phone and tapping an icon simply to occupy myself. Whether that’s out of boredom, awkward shyness, or habit. None of these are acceptable reasons and yet countless times a day these feelings would trigger my action to grab my phone, unlock it, and tap an app. I believe this is a complete waste of time, and more importantly a waste of brain power.

Secondly I’ll share my idea around a solution. I found that 9 times out of 10 my mindless phone usage was begun by opening an app on my home screen. Now, I’m sure you can already guess why I made the adjustments to my home screen that I did and although drastic I’d like you to read about my observations before coming to your own conclusions that I’m crazy or pointlessly radical.

By taking this drastic action I now had to click the home icon (the middle button in the screenshot above) and then scroll down to the app I wanted to open.

Side note: In addition to changing my home screen I also downloaded and applied an icon pack to all of my icons. This keeps the apps from looking familiar and forces my brain to actually look at each app’s name to find what I’m looking for. I may write a subsequent post on this due to some additional interesting finds.

Now I know you’re thinking that I’m wasting valuable time by forcing myself to jump through the hoop of opening up the app screen and scrolling for the app I wanted to use but the truth is actually quite different.

I discovered that the majority of my legitimate phone usage came from responses to notifications. Keep in mind that I’ve removed social media apps from my phone so the notifications I see are now mostly surrounding email, Slack, text, or other personal and relevant communications. So first observation: I was not significantly hindered in my interactions with others as a result of this home screen decision. In fact, my engagement levels were the exact same on tasks that involved actual phone tasks (as opposed to the mindless phone usage).

Secondly, and perhaps even more alarmingly, I discovered just how frequently I would grab my phone and unlock it without having a purpose for doing so. I’m sure everyone knows this is obvious, but now that my apps were a two-step process further away from my finger, it broke the mindless app tap that normally followed opening my phone. Instead I found myself staring at my blank home screen, unsure what I was actually doing. This was amplified by the times I actually tapped the home screen button and found myself staring at a list of apps with no idea why I was there. Wow. For me this was a huge wake-up call. I had no idea of the overwhelming number of times I was mindlessly opening my phone.

There are all types of excuses for not doing something drastic like this but I’d suggest ignoring your dopamine-addicted tendencies and consider radical action. 😉 I can tell you from personal experience that so far this experiment has been an incredibly eye-opening opportunity and one I plan to continue. I’ll share further observations in future posts as I continue this journey into proper phone usage and how to take back control of my time and my mind. And of course I would love to hear your thoughts and opinions in the comments below regarding your own phone decisions and radical action.

December 13, 2017
Standardizing GitHub for Product Management

GitHub is a fantastic tool for organizing code, handling issues, and tracking feature requests. Mautic has always used GitHub for its code repositories and more. But there are struggles that come with tools like this: without proper organization or structure, a project can quickly become a chaotic jumble of questions, feature requests, and code. A lack of standardization destroys an otherwise useful system.

[Screenshot: Mautic’s GitHub labels]

I am proud of the organization that Mautic has around issues and the use of labels. This has historically been something we’ve done somewhat well. We’ve also made extremely good use of the basic code repository and release functionality. You can see this well-organized approach by diving into the Releases tab and noting how every release going all the way back is documented and tracked. It’s refreshing to see things like that and speaks to the consistent attention to detail we’ve worked so hard to maintain.

[Screenshot: Mautic’s GitHub release list]

But I firmly believe there are always things that can be improved and our GitHub usage is no different. So I sat down and looked at our repositories and began exploring ways we could improve our organization and structure around our already amazing product.

Improving and Explaining Labels

I shared at the beginning that Mautic is very good in its use of labels as it relates to issues and pull requests. But I think sharing what those labels mean and how they are applied is helpful as we discuss a standardization of our GitHub account and organizational structure. Plus, I’m not sure it’s been clearly outlined recently how those labels are used.

Label Meanings
Here’s a current list of our existing GitHub labels, when they are applied to issues, and how they should be interpreted.

  • Backlog: Applied to any issue left in an open state longer than 6 months, not automatically applied yet but will be in the future.
  • Bug: Applied to issues directly related to a bug in the production code.
  • Code Review: Applied to issues that require additional review of code by core and community developers before merging will occur.
  • Duplicate: Applied to issues that have already been submitted. New issues will be closed in deference to older ones.
  • Feature Request: Applied to new features or modifications to functionality that differ from what was originally intended.
  • Has Conflicts: Applied mainly to PRs that conflict with other parts of the Mautic codebase. Must be resolved before merging.
  • L1: Applied to issues where the fix is deemed to require the lowest level of knowledge to create. Alternatively applied to PRs where the testing is considered minor in time and intensity. Considered a Level 1 item.
  • L2: Applied to issues where the fix is deemed to require a moderate level of knowledge to create. Alternatively applied to PRs where the testing is considered moderate in time and intensity. Considered a Level 2 item.
  • L3: Applied to issues where the fix is deemed to require a significant level of knowledge to create. Alternatively applied to PRs where the testing is considered significant in time and intensity. Considered a Level 3 item.
  • Needs Automated Tests: Applied to pull requests submitted without the appropriate unit tests. Every pull request requires unit testing of code.
  • Needs Documentation: Applied to pull requests submitted without the appropriate documentation. Every pull request requires accompanying documentation.
  • P0: Applied to issues considered critically important to resolve. These issues are ‘showstoppers’ and ‘break’ the entire Mautic system.
  • P1: Applied to issues considered detrimental to Mautic functionality. These issues are important but do not stop day-to-day operations.
  • P2: Applied to issues considered annoyances to Mautic functionality. These issues are high priority to fix but do not restrict usage.
  • Pending Feedback: Applied to issues or pull requests that require further information or discussion before being added into work queues or testing.
  • Pending Test Confirmation: Applied to PRs that still require a second successful test confirmation. Every pull request requires 2 successful +1 tests before merging.
  • Ready To Commit: Applied to PRs that have been tested and are ready to be merged.
  • Ready To Test: Applied to PRs that have been submitted with completed code and are ready for community testing.
  • Translations: Applied to issues that are related to translations. All language translation strings are handled by Transifex.
  • User Experience: Applied to issues and PRs that are related to how the end user uses Mautic and experiences the platform.
  • User Interface: Applied to issues and PRs that are related to the user interface directly, typically these are cosmetic problems.
  • WIP: Applied to PRs that are not ready for testing. These are Work In Progress and represent work actively being done by another individual.

I recognize that list feels long and possibly a bit daunting, but I trust the somewhat exhaustive explanation will help clarify how Mautic applies labels to issues and pull requests, and make your life a bit easier as you contribute to the Mautic platform. Other labels may be added in the future, either for temporary usage or as additional labeling is needed.

Improving Feature Requests

The way that Mautic handles feature requests is quite simple at the moment. We have a label marked Feature Request which is applied to every issue that represents a new feature request (see, I told you it was simple). There are a few problems with this basic approach to feature request tracking. First, it drastically clutters and inflates the total number of "issues" on the project (notice the screenshot above: 728 issues). That number actually includes 314 feature requests, which inaccurately skews the issue count. Second, with the current system feature requests are lumped in with every other issue, making it exceedingly hard to identify which features have been requested and which are the most popular. This leads me to the first standardization I am recommending we implement in GitHub.

Standardize Feature Request Voting
Moving forward I recommend that the somewhat new “reaction” feature in GitHub be used for feature voting.

[Screenshot: GitHub reactions on a feature request]

Specifically, only the following two reactions should be used for feature request voting: 👍 and 👎. A thumbs-up indicates a +1 vote on the feature and a thumbs-down indicates a -1 vote.

This will then allow anyone to quickly view the issues on GitHub and with the proper query be able to see a list of feature requests sorted by popularity. To make it easier for you here is the exact link you can use:

Most Requested Feature Requests · mautic/mautic · GitHub

As a bonus side effect of this standardization, we will be able to programmatically pull this list of issues via the API and inject a Top Feature Request list in other locations as well.
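
For the technically inclined, here is a minimal sketch of what that programmatic pull could look like, written in Python against the public GitHub search API. This is an illustration, not an official Mautic tool, and it assumes the reactions rollup GitHub returns on issue objects:

import requests

API = "https://api.github.com/search/issues"

def top_feature_requests(repo="mautic/mautic", count=10):
    """Fetch open feature requests sorted by thumbs-up reactions."""
    params = {
        "q": f'repo:{repo} is:issue is:open label:"Feature Request"',
        "sort": "reactions-+1",  # search sort key for +1 (thumbs-up) reactions
        "order": "desc",
        "per_page": count,
    }
    response = requests.get(API, params=params,
                            headers={"Accept": "application/vnd.github+json"})
    response.raise_for_status()
    for issue in response.json()["items"]:
        plus = issue["reactions"]["+1"]
        minus = issue["reactions"]["-1"]
        print(f"+{plus}/-{minus}  #{issue['number']}  {issue['title']}")

if __name__ == "__main__":
    top_feature_requests()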

Improving Pull Request Testing

The second area where I think a bit of standardization may be helpful involves the testing of pull requests. Currently there is a core team in the community managing the testing and committing of pull requests. This process is not as clear or visible to the community as it should be, and it does not demonstrate a clear path for community volunteers to grow into various leadership positions. The core team is open to community members who demonstrate their ability to properly test and merge code contributions. However, the privilege of that responsibility has to be earned by building trust, and that trust can't be established without some method of consistently demonstrating personal reliability.

This brings about my second recommendation for GitHub standardization.

Standardize Pull Request Testing
As I shared above, our Mautic community needs to be empowered to play an active role in deciding which pull requests are merged. Our volunteers also need the opportunity to grow into greater leadership roles and become critical parts of the core team. This means community testing of pull requests. Specifically, a +1 listed as a comment on a pull request signifies that the developer has tested the PR in accordance with the test procedures listed in the pull request description.

[Screenshot: pull request testing overview]

Alternatively, adding a -1 comment signifies the developer has followed all the same steps and procedures outlined in the pull request but did not find the fix to successfully solve the issue.

[Screenshot: a pull request test comment]

Notice in the above comment that additional text was added (this is acceptable as long as the comment begins with the appropriate +1 or -1 designation).

And you guessed it, as a bonus we can programmatically use the comment text to extend the use of GitHub to other systems and improve communications as well as automations.
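
As a rough sketch of that kind of automation, assuming the comment convention above, a script could tally the votes on any pull request. The repository argument and PR number below are placeholders for illustration:

import requests

def count_test_votes(repo, pr_number):
    """Tally +1/-1 test votes from a pull request's comments."""
    url = f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments"
    response = requests.get(url, headers={"Accept": "application/vnd.github+json"})
    response.raise_for_status()
    plus = minus = 0
    for comment in response.json():
        body = comment["body"].lstrip()
        # Only comments that *begin* with the designation count as votes.
        if body.startswith("+1"):
            plus += 1
        elif body.startswith("-1"):
            minus += 1
    return plus, minus

plus, minus = count_test_votes("mautic/mautic", 1)  # PR number is a placeholder
print(f"+1 votes: {plus}, -1 votes: {minus}, mergeable: {plus >= 2 and minus == 0}")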

It may not sound like a terribly fancy system, or even a huge difference from how we currently manage our GitHub account, but these little improvements and standards make our entire development process better and more streamlined. And the more organized we are, the more efficient we can be. The more efficient we are, the better and faster we can grow.

You now have all sorts of new knowledge and insight into Mautic development, go find your favorite feature requests and 👍!

[Image: Free marketing automation software is a tool.]

January 12, 2016
Free Software and Success

Marketing automation is highly complex. A free app gives the wrong signal as if everyone with MA can be successful.

I recently saw this tweet and it annoyed me. The foundational belief that if something is free it cannot be of real value is completely and totally false. Availability has never implied success. Cost does not unequivocally equal value. Granted, there are many areas of life where a brand may charge a premium for a similar product. You may find yourself paying for a logo or a particular "name brand" recognition, but this hardly means the higher the price, the greater the value.

The reverse is even more fallacious. A more affordable (or even free) price does not automatically reflect the quality of the product, the value of the software, or the ability of that software to contribute to future success.

A free app means the raw goods, the resources, are available without cost. The impetus still lies with the business to correctly implement the software in order to be successful. Let's take a different perspective.

Imagine you find a stunning piece of software: it's beautiful, it's highly functional, it does absolutely amazing things. But you can't find the price anywhere. You're convinced this software is just what you need, so you agree to begin using it regardless of the price. Now, you have two possible outcomes. Either you fail to successfully implement the software and it sits there, beautiful, shiny, untouched. Or, the second option, you take this software, run with it, implement it, and it makes your business incredibly successful. You'll notice one thing that's not revealed: the cost. Through this example we discover that the price of the software plays absolutely no role in the eventual outcome.

The price of the software tools used should never be thought of as an indicator of the business's eventual success.

Now, marketing automation has traditionally been considered complex, detailed, and difficult to use. But the status quo exists to be broken. Disruptive organizations, like Mautic, demonstrate this fact. Mautic is revolutionizing the marketing automation industry with convenient, easy-to-use, intuitive marketing software. Mautic empowers everyone and gives each person the tools they need to be successful. Mautic gives the raw product. Mautic supplies the things necessary for success, but does not guarantee it. And here is an interesting fact: as we look at Mautic and what it is capable of doing, we haven't once discussed price.

This leads to two obvious and glaring contradictions of the initial suggestion. First, marketing automation is no longer complex and difficult to set up or use. Second, Mautic doesn't make you successful any more than having the various parts of a bicycle means you can ride one. Regardless of price, software is a tool to be used to accomplish a goal. You can read more about this theory in a recent marketing automation tool article on Mautic.org.

Bottom line: Don’t reject something new based on preconceived possibly erroneous notions.

[Image: Marketing Automation Tool Metallica]

November 30, 2015
The Importance of Marketing Tech

Recently I answered a question on Quora about the efficacy and “rule of thumb” for the benefits of marketing technology and how this tech should increase revenues. I thought it was a great question and followed a train of thought I have recently been pursuing so I added my answer to the page.


I believe you will be hard-pressed to find any definitive metrics for how efficacious marketing technology is for a business. The reason is in part related to a previous blog post I wrote on Mautic.org. The short version of that post: marketing automation platforms and other marketing technologies should always be considered tools to be used, not solutions. Here is what I mean and how it relates to this question. Let me use an analogy to make it easier to understand.

I'm very interested in bass guitars. I love the idea of laying the foundation of a musical rhythm the rest of the band then builds upon to create beautiful music. Bass guitars come in a variety of sizes, shapes, and styles. Each has its own beauty and its own purpose. They are powerful tools that, when placed in the right hands, can be used to impress and stun an audience. But if I were to give a bass guitar to my son (an awesome kindergarten kid) the result would be vastly different. You naturally and instinctively understand this difference. The guitar didn't change; the player did. And the results are completely different.

The analogy should be fairly self-explanatory. Those same principles apply to marketing tech. These are tools to be used, and with the right marketing department they can impress and stun the C-suite and others. Inexperienced or new marketers will find the benefits far fewer and their path much different.

Once we've established this baseline understanding, there are numerous metrics and statistics which demonstrate what is possible with effective marketing strategies. But remember, you should think of this like putting a Rickenbacker 4001 in the hands of Cliff Burton. If I were to pick up the same instrument my results would be very different. Here are a few statistics floating around regarding marketing technology and improvements in efficiency and costs. Your results may vary.

Marketers who implement marketing automation see 53% higher conversion rates and annual revenue growth 3.1% higher than others.
http://aberdeen.com/Aberdeen-Library/7603/RA-marketing-lead-management.aspx

Email marketing has an ROI as high as 4,300%.
https://imis.the-dma.org/bookstore/ProductSingle.cfm?p=0D45047B%7C4DA56D9737FF45DF90CA1DA713E16B80

Successful lead nurturing programs average 20% increase in sales opportunities.
http://www.demandgenreport.com/industry-resources/white-papers/204-calculating-the-real-roi-from-lead-nurturing-.html

So, there are three quick stats; a quick Google search will yield hundreds more. The key here, again, is that the marketing automation platform or other marketing technology used is only a tool to help you be a better marketer. The right tool can save you hundreds of hours. Pair your expertise with a powerful platform and the results will be epic.

*For full disclosure, I contribute to Mautic, the free marketing automation platform, and have a strong bias toward the belief that a powerful platform doesn't have to cost a fortune. Mautic is an open source tool capable of helping you rock out like Metallica.

[Image: OS X Yosemite]

October 17, 2014
A UI Treat from Yosemite

This may sound silly. In fact you may laugh at this, but I have to share it anyway. Recently, as some of you know, I had to send my laptop back to Apple because the video card in it went kaput. It just quit working and was making the computer constantly shut down. When I got it back I found out they had completely wiped the hard drive. I was going to have to start completely over setting up my environment. Oh, the pain. All the work I'd done configuring multiple versions of PHP and my local development tools. Oh well, the screen looked amazing and the video card was working.

I decided as long as I was having to start fresh I might as well download the latest release of OS X and play around with Yosemite. I had previously watched the keynote when Yosemite was announced and I must admit I wasn’t taken by anything spectacular. Nothing made me catch my breath or decide I had to have it (obviously as I hadn’t downloaded it before). But now that I was starting fresh I had nothing to lose. So off I went to download the beta.

The Search Command

While I still don't see huge changes or differences which make me really amazed, there are a few things I have found I absolutely love. I'll share two very quickly. First, for those who know how I work: I am always on the keyboard. I rarely use the mouse and try to do as much as I can without moving my hands from the keyboard. As a result the CMD+Space shortcut to launch search and then type the program I want to use is a huge favorite of mine. It's almost second nature to hit the key combo and look to the top right to begin typing the app name. Well, with Yosemite they've brought this feature front and center, literally. Now I see a much larger font and much more detail as I enter the program name. It's pretty cool.

The Context Menu

The second feature is much more subtle. It’s incorporated in several different layouts but I notice it most when using the right click. I think they’ve called it the frosted glass look. It’s subtle, but I love it. Something about the semi-transparent nature of the context menu just feels right. I don’t know quite how to describe it or quite what I would say is the reason for my love. But I enjoy it. Now I admit I don’t see it much because I use the keyboard mostly (refer to the point above), but when I do find myself using it I like it.

Yosemite and You

If you have the opportunity to try Yosemite take a look at these features and see for yourself. Sometimes the little UI treats are the most important. I think that’s a great lesson to take away. It’s not the next game changing operating system and it doesn’t do something completely revolutionary, but the little things matter. The little touches which make something stand apart are critically important. Remember this as you’re working on your next project. What is your frosted glass moment? What can you add to make your users’ experience unlike anything else?


[Image: Coders are Creative]

September 18, 2014
Programmers are Creative

Programmers are often not the first profession which comes to mind when considering creativity. Everyone likes to mention the designers and the front-end UI/UX people when they talk about being creative. And while those roles are certainly the most visible and visually creative, there is a certain level of creativity involved in programming as well. So for all the programmers out there, here are 3 reasons why programmers are creative.

1. Programmers Solve Problems Creatively

I know, you were expecting this for the first point, weren't you? This is what we always tend to laugh about: programmers doing whatever it takes to "make it work". Or the famous code comment which says, "Don't touch this, it works." Of course we consider this a bad form of creative programming, and yet it hints at something deeper. Obviously there must be some level of creativity. I've seen examples of code which look beautiful on the outside and function well, but when I dig deeper into the code I'm amazed. The creative "workarounds" (that's an affectionate and oft-used name for this type of coding) which are in place sometimes leave me speechless.

But there are plenty of great examples of solving problems creatively. Programmers have a unique way of analyzing and solving problems that would leave others completely stumped. They are able to analyze and determine alternate methods for getting the results they need, and often they won't take no for an answer. It's highly creative.

2. Programmers See Code As Beautiful

One of my favorite movies of all time is The Matrix. It's a classic movie and one which holds a special place in my heart. I remember with great fondness watching the coders stare at the screen and "see" the world. They no longer saw the code; instead they saw the world as it existed around them. I loved that idea. I have always wanted to be like that, and while there is a level of science fiction involved, I have also seen coders who exhibit a creative and unique ability to see what the code does. These programmers see the very lines of code as beautiful, and through the code they see the outcome. They are able to experience the end result, what the application will do, without ever seeing a user interface.

When a programmer is able to look at code and view the finished product through just the code, I believe they see the code as beautiful. Good, well-written code is indeed beautiful. I will often tell people the work we do at WebSpark is more than just making great applications. We make beautiful code. We see beauty in the code. And that makes us creative.

3. Programmers Are Uniquely Gifted

Code engineers are accustomed to several aspects of their work which make them unique. As you'll find with other, more well-known creative types, good creatives cannot be rushed. They work at the right speed to accomplish what everyone else will see as creative genius. They have a vision in their mind and they work towards that vision in a manner which suits them.

Secondly, programmers focus on a variety of aspects of a system which to others may seem disparate and unrelated, and yet in the end they tie back together perfectly to create the final product. I've watched firsthand as expert programmers create a bit of code in one file, navigate three folders away to an unrelated file to add additional code, and then 30 minutes later write an additional function which fits perfectly between the two and makes everything work together as a cohesive whole.

Lastly, coders, like other creative types, work crazy hours (most of the time). You'll find them up at all hours of the day or night feverishly working on their ideas or projects. Sometimes it's deadlines, but other times they do it because they love what they are doing. They are passionate about their work and work with a frantic excitement.


There's no doubt in my mind that programmers are creative, and they demonstrate this creativity in a number of ways. I've picked out just three (five if you count the last point as three separate ones). The next time you find yourself questioning a programmer's ability to be creative, stop yourself and think about this post. It's possible you'll find they are just as creative as anyone else.


[Image: Multi-Tenancy and Servers]

July 22, 2014
The Importance of Multi-Tenancy

This may be a bit more technical than most of my posts, but I think it's an interesting topic and falls in line with the purpose of my blog: sharing relevant information, especially those items which will either improve your business or showcase possible future improvements in the tech industry. I'd like to share some details and specifics regarding the concept of multi-tenancy in software application development.

I suppose with a technical topic like this it might be helpful to define what exactly is meant by multi-tenancy. Here's the Wikipedia definition:

Multi-tenancy refers to a principle in software architecture where a single instance of the software runs on a server, serving multiple client-organizations (tenants).

Does that make perfect sense? If it's still a bit unclear (or much, much more than a little unclear) here's a more detailed definition:

In a multitenancy environment, multiple customers share the same application, running on the same operating system, on the same hardware, with the same data-storage mechanism. The distinction between the customers is achieved during application design, thus customers do not share or see each other’s data.

Hopefully that definition helps clarify what the idea implies. So what does this mean in today's technology scene and how does it affect the design patterns of your software application? Here are 3 ways multi-tenancy will be helpful to your application and 3 things you'll need to do differently in the development of your app. Let's start with the 3 benefits.

Upgrades are Easier

When you create a multi-tenant application, the upgrade process becomes significantly simpler. Instead of needing to update every instance of your software across a large number of servers, you are able to update a single, central application or codebase and have the changes instantly available to all users. This greatly simplifies the process of deploying new versions and reduces the time involved. The same thinking applies to creating new accounts: with a multi-tenant application, the process for spinning up a new cloud and application is incredibly easy and can be done very quickly.

Customizations are Easier

The second benefit comes in terms of customizations. In these types of applications you'll obviously need to provide some level of customization for each user installation. Whether it be template changes, additional functionality, or merely a logo, you will need to be able to deliver custom options. With multi-tenancy based applications you can provide an additional layer allowing for customizations while still maintaining an underlying codebase which remains constant for all users.
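
To make the layering idea concrete, here is a tiny illustrative sketch in Python; all of the names and values are invented for the example, not taken from any specific framework:

BASE_CONFIG = {
    "template": "default",
    "logo": "/assets/logo.png",
    "features": {"reporting": True, "api": True},
}

# Per-tenant overrides; anything not listed falls back to the base config.
TENANT_OVERRIDES = {
    "acme": {"logo": "/tenants/acme/logo.png", "template": "dark"},
    "globex": {"features": {"api": False}},
}

def config_for(tenant):
    """Merge a tenant's overrides onto the shared base configuration."""
    overrides = TENANT_OVERRIDES.get(tenant, {})
    merged = {**BASE_CONFIG, **overrides}
    # Merge the nested feature flags so unspecified flags keep their defaults.
    merged["features"] = {**BASE_CONFIG["features"], **overrides.get("features", {})}
    return merged

print(config_for("acme"))    # custom logo and template, default features
print(config_for("globex"))  # default look, API switched off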

Ongoing Cost Savings

While there are many benefits to multi-tenant applications, I'll provide only one more here. Rather than focusing on the benefits to cloud computing (there are many), I'll mention that these apps provide significant cost savings over the alternatives. We talked about how multi-tenancy speeds up upgrades and saves time (and therefore cost), but in addition the server and cloud requirements for a multi-tenant application are much lower. No dedicated set of resources must be configured for each customer, and depending on your architecture the environment can be very minimal. The opportunity to save money takes many forms and grows as the application scales up.

Ok, so that all sounds pretty good. In fact, it sounds insanely good. Too good to be true? Nope. But before you immediately drop everything you're doing and begin delivering multi-tenant applications, here are a few things you need to consider when developing your apps. These are certainly not deal-breaking problems, but rather some things you should plan for when beginning your development.

Infrastructure Complexities

There are of course infrastructure complexities inherent in developing a multi-tenant application. You must consider how you wish to structure your application. Do you want to use a multi-database approach or a single database with a tenant ID field? Questions like these must be answered and are often unique to the environment of the specific application. There are positives and negatives to each approach, so it does depend on the situation. Other infrastructure considerations involve the file system directly, the deployment methods, and the configuration options unique to each user.
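
As a quick sketch of the single-database, tenant-ID approach (using SQLite from Python's standard library; the table and column names are purely illustrative):

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE contacts (
        id        INTEGER PRIMARY KEY,
        tenant_id INTEGER NOT NULL,  -- every row belongs to exactly one tenant
        email     TEXT NOT NULL
    )
""")
db.execute("CREATE INDEX idx_contacts_tenant ON contacts (tenant_id)")
db.executemany("INSERT INTO contacts (tenant_id, email) VALUES (?, ?)",
               [(1, "a@tenant-one.example"), (2, "b@tenant-two.example")])

# Every query must be scoped by tenant_id so tenants never see each other's rows.
print(db.execute("SELECT email FROM contacts WHERE tenant_id = ?", (1,)).fetchall())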

Security Implications

Security is always of great importance when creating an application; however, when creating a multi-tenant application it becomes a much greater priority. You are now creating an application which will be serving many users from a single codebase. This increases vulnerability, as there is now a single point of failure for many clients. Of course this can be mitigated through a variety of options, but it remains something to be considered. In addition, depending on the database method chosen in your infrastructure planning, you may have to plan for additional security measures to protect your database and eliminate cross-account failures.
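
One common mitigation, sketched here with invented names, is to force all data access through a layer that injects the tenant filter itself, so no query can forget it:

import sqlite3

class TenantScopedDB:
    """Data-access wrapper that injects the tenant filter into every query."""

    def __init__(self, conn, tenant_id):
        self.conn = conn
        self.tenant_id = tenant_id

    def select(self, table, columns="*"):
        # The WHERE clause is added here, so callers can never omit it.
        # Table/column names must come from trusted code, never user input.
        sql = f"SELECT {columns} FROM {table} WHERE tenant_id = ?"
        return self.conn.execute(sql, (self.tenant_id,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (tenant_id INTEGER, email TEXT)")
conn.executemany("INSERT INTO contacts VALUES (?, ?)",
                 [(1, "a@one.example"), (2, "b@two.example")])
print(TenantScopedDB(conn, tenant_id=1).select("contacts", "email"))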

Shared Codebases

The last of the three considerations I'd like to mention is the issue of a shared codebase. I listed this as a benefit, but it is also something to be accounted for when creating your application. When you create your app, are you going to share an entire application or will you give some level of unique filesystem to each cloud? One of the main benefits of multi-tenancy is a single codebase; however, you have a number of options for how to serve multiple clients from that single codebase: symlinks, hard links, or other methods (especially when considering unique elements such as configuration files, templates, or logo files). You will need to plan for this when creating your multi-tenant applications; see the sketch below.
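
Here is a hypothetical provisioning sketch of the symlink approach; the paths and helper function are invented for illustration. Each tenant gets its own directory of unique files, plus a link back to one shared checkout of the code:

import os

SHARED_CODE = os.path.abspath("releases/current")  # the one codebase all tenants share

def provision_tenant(name, base="tenants"):
    """Create a tenant's unique directories and link the shared code in."""
    root = os.path.join(base, name)
    os.makedirs(os.path.join(root, "config"), exist_ok=True)  # per-tenant configuration
    os.makedirs(os.path.join(root, "media"), exist_ok=True)   # per-tenant logos, templates
    app_link = os.path.join(root, "app")
    if not os.path.islink(app_link):
        os.symlink(SHARED_CODE, app_link)  # the code is shared, never copied
    return root

print(provision_tenant("acme"))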

As with the benefits section above, the considerations section also holds many more little things to plan for beyond just the three I mention. But since I want to keep this at a higher level, I'm limiting myself to only three of each. There may be better or more important ones you can think of. If so, I'd love to hear about them.


As you can see, there are a few development concerns to take into consideration when planning and building a multi-tenant application. However, with a well-documented plan and clear direction on how you'll handle a few key development decisions, you can create a remarkable application capable of enormous benefits.

The concept of multi-tenancy has been around for a while; however, with the growing focus on cloud computing and the ability to create cloud-based applications, building multi-tenant applications becomes far more important. I highly encourage you to do more research on this topic and consider using multi-tenancy for your next application. I believe that if you wish to remain relevant amid the coming technological advances in this space, you will need to create multi-tenant applications.