Thursday, October 24, 2013

Vendor your Bundle

   Has this ever happened to you?

   You're working on a Rails project that requires version X of some gem.  But you're also working on another project that requires version Y.  If you use the right version on one, and ever clean things up, you break the other.  And if you ever need to look at the source of a gem, it's squirreled away in some directory far from the projects.  What to do, what to do?

   You could use rvm to create a new gemset for each one.  After a few years of projects, you wind up with a gazillion gemsets.  And if you ever try to upgrade the Ruby a project uses, by so much as a patchlevel, you have the hassle of moving all those gems.  And the paths to your gems get even hairier.

   Alternative Ruby-management tools like rbenv and chruby might offer some relief; frankly, I don't know... but I'm not holding my breath.

   But there's still hope!

   You could stash each project's gems safely away from all other projects, inside the project!  How, you wonder?

   When you create a new Rails app, don't just do rails new myproject.  Instead, do rails new myproject --skip-bundle.  This will cause Rails not to install your gems... yet.

   Now, cd into your project directory.  Edit your Gemfile; you know you would anyway!  When your Gemfile is all to your satisfaction, now comes the magic: bundle install --path vendor.

   What's that do, you wonder?  Simply put, it installs your gems down inside your vendor subdirectory -- yes, the same one that would normally hold plugins, "vendored" assets, etc.  There will be a new subdirectory in there called ruby, under which will be subdirectories called bin, bundler, cache, doc, specifications, and the one we're interested in here: gems.

   Now you can upgrade your gems without fear of interfering with another project.  You can also treat the project directory as a complete self-contained unit.

   But wait!  There's more!  As an extra special bonus, if you want to absolutely ensure that there is complete separation between projects (handy if you're a consultant, like me), you can even make these project directories entirely separate disk images!  For even more security, you can then encrypt them, without having to encrypt other non-sensitive information, let alone your whole drive.  Now how much would you pay?  ;-)

   In the interests of fairness, though, I must admit there is a downside to this approach: duplication of gems.  Suppose Projects A, B, and C all depend on version X of gem Y.  Each project will have its own copy.  For large projects, that can soak up a gigabyte of disk space each.  You can cut down on this by making the gems symlinks or hardlinks to a central gem directory... but why bother, in this age when disk space is so cheap?

   If you know of a better way, or more downsides to this, or have any other commentary, please speak up!  Meanwhile, go try it for yourself, at least on a little toy side-project.  I think you'll be pleased with the simplified gem-source access, and the much looser coupling between projects.

   UPDATE:  Some people read the above and thought I meant to put the installed gems in your source code repository (e.g., git), so that the gems would be put into your staging and production environments from there.  This is a serious problem for gems with C-extensions to compile, if your staging and/or production environments are on different system types from your development environment.  That situation is very common, as many (if not most) Rails developers prefer Macs, and the production machine (and staging if any) is typically Linux, or occasionally one of the BSD family.

   This is not what I meant.  Instead, you probably want to put your gem directory (usually vendor/ruby) into your .gitignore file (or equivalent for whatever other SCM system you may be using), so that your SCM repo will ignore them.  Do still SCM your Gemfile and Gemfile.lock.  Then, when you install to staging or production, you will get the same versions of the same bunch of gems, but the C-extensions will still be compiled for the particular environment.
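
   For instance, with git and the --path vendor layout described above, keeping the gems out of the repo while keeping the Gemfile and lockfile in it is one line:

```shell
# Keep the installed gems out of version control; every environment will
# reinstall (and compile any C-extensions) itself from the committed
# Gemfile and Gemfile.lock via `bundle install`.
echo '/vendor/ruby' >> .gitignore
```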

Sunday, October 6, 2013

Ruby Day Camp Review

   Some of you, especially locals, may recall that I got waitlisted for RubyDCamp, and decided to put on Ruby DAY Camp for us poor shlubs.  Some of you have been asking how it went.  So, forthwith, a review:


   TL;DR: Ruby Day Camp was a qualified success, with a fun and educational time had by all of the handful who showed up.

The Good:

   A fun and educational time was had by all, with great camaraderie and surprisingly good lunch.

   We started with a couple rounds of code-retreat (doing the Game of Life), had lunch, and spent the rest of Saturday and Sunday as an unconference.  There were discussions on Ruby Gotchas, NoSQL databases, Big Data, freelancing, consulting, and other random topics, mostly at least somewhat Ruby-related.  We wrapped up with a four-way round of code retreat.  It started as a three-way rotation among the more advanced Rubyists, while the other advanced ones, and a few newer ones, watched on a large monitor.  A fourth advanced Rubyist joined us late in the day.  Since the even number would mean that each person would always be in the same role (writing either tests or production code), we decided to mix it up, with the next “player” chosen randomly (by a Ruby script).  All in all, a good weekend-ful of Ruby (and related) geekery.

   The weather was perfect, a bit cool in the morning, warming up to very comfortable in the afternoon, especially in the shaded shelter.  Just as with RubyDCamp, a great time to hold a semi-outdoor event.

   The venue was wonderful, a shelter at Pohick Bay Regional Park (about halfway between the Beltway and RubyDCamp), with good power (once we went to the ranger station and had it turned on), and grills.  The only downsides were a fairly long trek to the bathrooms, and lack of Metro access (but still better than RubyDCamp).  At least said bathrooms were fairly clean.

   Saturday’s lunch was a bit chaotic, but it all worked out.  Some brought something for themselves, but some did not, and needed to make a fast food run.  Some had brought enough to share, such as sausages heated up on the grills, using charcoal bought on-site.  More people brought things to share on Sunday, so we even had some bagels and cream cheese for breakfast, in addition to the sausages, hot dogs, and hamburgers for lunch.  There were even leftovers! It was a great example of a self-organizing agile team.  :-)

The Not-So-Good:

   Attendance was very low; even with a couple of walk-ins, we only had seven people, including myself.  There were several no-shows, some without notice.  This can be mitigated by starting to advertise it earlier, and maybe more widely.  The financial effects on the organizer (or whoever ponies up for the shelter reservations) can also be mitigated by starting off on a “donate to register” basis, rather than “register for free and we’ll pass the hat there”, as I had initially done.  Of course, full corporate sponsorship would help with both!

   The low attendance made it impossible to support multiple tracks, as I had hoped to have.  But even so, we managed (I think) to hold discussions that provided value for the advanced Rubyists without overly confusing the new ones.

   Scheduling got very sloppy.  We started late, ended early on Saturday (late on Sunday but that’s because we basically decided to keep going), and had long lunch breaks.  Some of this can be fixed by experience (so we know how long some things are likely to take, like lighting charcoal without lighter fluid), keeping a better eye on the time, writing out a schedule in advance, doing some things differently (e.g., get funding and bring lunch makings, or coordinate potluck, and publish an earlier start time), or by just “going with the flow”.  Making a map and giving directions might also help people arrive on time for the first day.

   The list of topics for agenda-bashing was a bit much.  That was my fault entirely.  I should have stuck to the usual way (having participants fill out cards for each topic they really wanted to talk about), rather than brainstorming and writing down whatever topics came to us.

   Only about half the expenses got covered.  Again, more time would probably mean more signups (which, since it was “donate to register”, meant donations), plus maybe more ability to get corporate sponsors.  We could also raise the suggested donation.  But, since the expenses were fairly low (about $400), it’s not a big deal.  (The suggested donation was $20.  Donations ranged from $1, from someone who didn’t show, to $50, from someone who did.  Most others donated at the suggested level.)

Next Time?

   Next year of course I’ll be trying again to get into RubyDCamp.  But if I can’t, I would consider doing Ruby Day Camp again instead.  At least I’ve got this year’s experience to build on.  If there are enough registrations (or corporate funding), we could even add perks, like food and maybe even overnight lodgings.

   If I do get a RubyDCamp slot, I’d still encourage someone else to run Ruby Day Camp.  I’d gladly help put it together, even if I wouldn’t be attending.

   Alternately, I could do it in the spring, or some other time that would not compete with RubyDCamp...  but then RubyDCamp waitlistees wouldn’t have an alternate activity, for a weekend they may have carefully blocked out on their calendars.  But then again, we’ve all got side projects we could be working on!

Monday, September 23, 2013

Is TDD Worth It?

   OOPS!  I was asked to flesh out an earlier version of this post, to contribute to the blog of one of my clients (Celerity IT).  In doing so, I messed up the existing post.  Rather than retrieve it from the dim dark depths of Internet history, I give you here the new version.  Meanwhile, they have further edited it and posted their version at

   There is some controversy among developers whether TDD (Test Driven Development) is really worth all the extra time it seems to take.

   To answer this, first we must define what TDD is!  Basically, it means developing a small piece of functionality by first writing a test for it, then code to make that test pass.  For instance, in making a job board, a piece of functionality might be "get a list of jobs with given text in the title".  So, you might write a few tests like "with an empty database, create a job with the title 'Java Developer', ask the Job class for all jobs with 'Java' in the title, and assert that I found that job", and the same but looking for 'Ruby' and assert that you didn't find it.
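
   To make that concrete, here is roughly what those tests might look like.  (The Job class here is a toy in-memory stand-in I made up for illustration; in a real Rails app it would be an ActiveRecord model backed by a database.)

```ruby
# Toy stand-in for the real model, just to show the shape of the tests.
class Job
  @all = []
  class << self
    attr_reader :all
    def create(title)
      job = new(title)
      @all << job
      job
    end
    def with_title_containing(text)
      @all.select { |job| job.title.include?(text) }
    end
  end
  attr_reader :title
  def initialize(title)
    @title = title
  end
end

# "With an empty database, create a job with the title 'Java Developer'..."
Job.all.clear
Job.create('Java Developer')

# "...ask the Job class for all jobs with 'Java' in the title, and assert
# that I found that job..."
raise 'expected a match' unless
  Job.with_title_containing('Java').map(&:title) == ['Java Developer']

# "...and the same but looking for 'Ruby', and assert that you didn't find it."
raise 'expected no match' unless Job.with_title_containing('Ruby').empty?
```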

   (This is not to be confused with BDD, or Behavior Driven Development.  BDD is like TDD from a user's point of view, rather than a developer's.  It usually uses much more English-like language, so as to let non-technical stakeholders be involved.  This can help narrow the gap of understanding between them and the developers.  Many people do BDD for the broad overview and then TDD for the nitty-gritty internal details.)

   Most developers, however, go a bit further.  To most, TDD is a cycle of "red, green, refactor":
  1. Red: write a test, to test whether the code (that you haven't written yet) does what you want... and verify that it fails.  If it doesn't, then your test is meaningless!  (Writing good tests is an art unto itself, which I won't go into in this post.)

  2. Green: make the test pass... and keep the whole test suite passing!  If your code broke anything else, you must now go fix the breakage, whether that means updating an outdated test, tweaking your new code, tweaking old code, etc.  You can't call it "green" until the whole test suite passes!

  3. Refactor: this is what makes it "go further".  To refactor a piece of code means to improve the internal design, without altering the behavior.  There have been many books written about this, so I won't go into detail; just know that, even above and beyond the benefits of the red and green parts, TDD practitioners feel a responsibility to clean it up.  If the way you got the test to pass was a horrible little kluge (admit it, we've all done it!), make it right before you check it in.
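
   As a tiny illustration of one trip around that cycle (using a made-up leap_year? helper, nothing from the job board example):

```ruby
# Red: write the assertions first; with no leap_year? method yet, running
#      them fails (NoMethodError) -- proving the test actually tests something.
# Green: the simplest code that makes all four assertions pass.
# Refactor: e.g., name the magic numbers or reword for clarity -- behavior
#      (and the passing tests) must stay exactly the same.
def leap_year?(year)
  (year % 4).zero? && !(year % 100).zero? || (year % 400).zero?
end

raise unless leap_year?(2012)   # ordinary leap year
raise if     leap_year?(2013)   # ordinary non-leap year
raise if     leap_year?(1900)   # century years are not leap years...
raise unless leap_year?(2000)   # ...unless divisible by 400
```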
   So... does this take extra time, and is it still worth it?

   One of the dirty little secrets of TDD is that, yes, it will slow you down... in the very short term.  If you just want to get a feature implemented today, and don't care about tomorrow, you might be better off skipping testing, whether before or after coding.


   That would not be wise in the long run, or even the medium run.  You have to think of it as an investment.  (This pairs quite nicely with the notion of "technical debt".)

   TDD will help you get that feature to market even more quickly than skipping the tests, and with far better quality!  The process of getting a feature not just implemented but all the way to market leaves plenty of time for bugs to be noticed and need fixing.  And for other features to be added that might interfere with this one.  And for bugs to be noticed in those other features, whose fixes might interfere.  And for situations to crop up that you just didn't anticipate.

   THAT is where TDD will save your bacon!  The test suite, that you grow along the way, will help get those features implemented, and bugs fixed, without breaking other features.  It's almost like giving you guard rails.  If something does break, then the test suite will help pinpoint it, saving you hours of exploratory manual testing.  In the long term it will save the project hours of exploration, debugging, finger-pointing, and other such nonsense... and if you're really doing Test-DRIVEN Development, probably guide you to higher quality code in the first place, saving months of disentangling and re-implementation.

   But how does TDD do that?  First we have to define what we mean by "quality".  The two main things TDD will help with are, from a general standpoint, "it does what it's supposed to do" (including not having bugs), and from a geekier standpoint, "it has better internal design".

   The first part is obvious.  After all, that's what the tests prove.  But what about "better internal design"?  What does that even mean?  There are many aspects of software design, but TDD guides you to think in terms of small easily testable pieces.  This leads to code that is more reliable, modular, reusable, flexible overall, and a host of other benefits.  For this reason, some people are now claiming that TDD should stand for Test Driven Design rather than Development.  Perhaps our more DoD-minded colleagues will call it Test Driven Design and Development, or TDDD, or T3D for short, in much the same way they keep coming up with more C's to precede an I.  ;-)

   Of course, if you include the Refactor part of the cycle, that's another investment, one that usually pays off quite well in the long run.  Paying attention to proper design early will make it much more likely that the code will stand the test of time, lasting much longer before needing to be totally chucked out and rewritten.  We've all seen code so horrendous that we'd rather start over from scratch, rather than modify it -- don't be "that coder".

TL;DR: TDD does make it take longer to implement a feature... but not to get it to market, and it yields much better code, saving even more time and expense later.

Thursday, September 12, 2013

Pluck Your Colon, or, Concisifying Your Ruby on Rails Code

   Recently I encountered some Ruby code that looked like:
ids = holder.things.collect { |thing| }
(I prefer to say map rather than collect, but they're really the same thing.  Which one you use is largely a matter of taste, influenced by what languages you've used in the past, and your laziness in typing.)

   There are two small successive improvements that can be made to this.  First, when you have any code of the form:
bunch_of_things.collect { |thing| thing.some_method }
(and remember, retrieving a data-member of an object is a method!) you can shorten that to:
bunch_of_things.collect(&:some_method)
   This uses the & shorthand for Ruby's to_proc method.  Long story short, the : makes a Symbol, and the & calls to_proc on that.  collect will send that to each item in turn, making it behave just like a block explicitly calling it on each item.  (I won't go into the nitty-gritty details here of how that works; if you care, investigate Ruby's yield keyword.)
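
   You can see both halves of that shorthand for yourself in irb:

```ruby
sym = :upcase          # the : makes a Symbol...
prc = sym.to_proc      # ...and the & (in &:upcase) calls to_proc on it
prc.call('foo')        # => "FOO"

# So these two lines are equivalent:
%w[a b c].map { |s| s.upcase }  # => ["A", "B", "C"]
%w[a b c].map(&:upcase)         # => ["A", "B", "C"]
```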

   For example, if you have a bunch of numbers and you want to get their even-ness, you can do:
[1, 2, 3, 5, 8].map(&:even?)
# => [false, true, false, false, true]
  You can also use the &: trick with block-taking methods other than collect/map, such as inject/reduce:
[1, 2, 3, 4, 5].inject(&:+)
# => 15
though of course inject will want a method that takes an argument.  (Why this is so is left as an argument for the reader.)

   Sometimes you can omit the & -- inject, for instance, is documented to also accept a bare Symbol naming a binary method -- but map and most other block-taking methods are not so accommodating.  Rather than memorize which is which, at the cost of one more character you may as well just always use the &.

  Back to our original code, though, there's another trick we can use to simplify this.

  ActiveRecord provides a method called pluck... and we were indeed using ActiveRecord.  pluck sets the SQL SELECT statement to retrieve only the columns you want.  The result is an array of the values ready to be used by your program.  (If you give it more than one column to pluck, the values are themselves arrays.  However, in this case, as in the vast majority, we were only interested in one column.)  Not only does this often make the results easier to deal with, it can also help deal with a large dataset by saving i/o between the database and your application, memory on both ends, etc.

   So, rather than go through the hoops of retrieving the things associated with holder, and then looping through them to extract the id column, this could be written more simply as:
ids = holder.things.pluck(:id)
   What are some of your favorite Ruby (or Rails) idioms for making common code more concise (short but still clear)?

Sunday, August 25, 2013

The Big Bad Bang, or, The OTHER Gotcha with Ruby's Bang Methods

   A few people have asked me why certain Ruby methods end in an exclamation mark (!), commonly known in programmer shorthand as "bang".  Examples include upcase! (to get the all-uppercase version of a string) and uniq! (to get the unique elements of an array).  Long story short, the bang means that you should use it with caution.

   Usually this is because it modifies the object passed and returns it, as opposed to returning a modified copy of it.  (In Ruby on Rails and many similar frameworks, this may also be because it will throw an exception if anything goes wrong.  However, we will focus on core Ruby methods.)  I'll show you another reason in a moment, but for now, let's just examine the normal usually-expected behavior.  For instance:
  str = 'foo'
  p str.upcase
  p str
will output FOO and then foo.  While upcase returned the uppercased version of str, it did not modify str.  On the other claw, if we add a bang, doing:
  str = 'foo'
  p str.upcase!
  p str
we get FOO and then FOO again!  In other words, upcase! returned the uppercased version, just as the non-bang version did, but it also uppercased str itself!

   Similarly, if we use uniq:
  arr = [1, 3, 3, 7]
  p arr.uniq
  p arr
we get [1, 3, 7] and then [1, 3, 3, 7], showing again that the non-bang version returned the unique values within arr, but did not modify arr, whereas if we add a bang and do:
  arr = [1, 3, 3, 7]
  p arr.uniq!
  p arr
we get [1, 3, 7] and then [1, 3, 7] again, showing that arr itself was modified this time.

   So far so good.

   But wait!  There's more!  There's another big bad gotcha waiting to getcha!

   Do not depend on the bang versions returning the same value as the non-bang versions!  (Even though that value seems to be the whole point of both functions!)

   In the specific cases above, yes they do.  But let's look at what happens if the variable is already how we want -- in other words, if the string is already all uppercase, or the array already has only unique values.  If we do:
  str = 'FOO'
  p str.upcase!
  p str
then, as expected, since it already fit our needs, str is unchanged.  But look at str.upcase! -- it's nil!

   Let's see what happens in the numeric case.  If we do:
  arr = [1, 3, 7]
  p arr.uniq!
  p arr
then, just as above, arr is unchanged... but arr.uniq! is nil!  How come?

   Long story short, standard Ruby bang methods often return nil if no change was needed.
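
   So if you want the resulting value and aren't sure whether a change will be needed, either stick to the non-bang version, or fall back to the receiver yourself:

```ruby
str = 'FOO'                    # already uppercase, so upcase! has nothing to do
nothing = str.upcase!          # => nil -- the gotcha
value   = str.upcase! || str   # => "FOO" -- fall back to the (unchanged)
                               #    receiver when the bang method returns nil
```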

   Worse yet, even that is not completely consistent.  When using any bang-method that you are not already very familiar with, be sure to RTFM.

Thursday, June 27, 2013

Pull Request Roulette gets some love!

   You may recall my earlier mention of my latest silly little side-project, Pull Request Roulette.  I decided it was actually useful enough to publicize it a bit more widely, and submitted it to the Ruby 5 podcast, and Peter Cooper over at Ruby Inside.  I didn't actually expect anything to come of it, or maybe fifteen seconds of airtime at most, but yesterday's episode of Ruby 5 mentioned it, for what felt like maybe a full minute!  One down, well spent, fourteen left.  :-)  (Unless you also count my being mentioned twice on the June 22 edition of Paul Harris's "Knuckleheads in the News" -- as a submitter, not a knucklehead!)

   They mentioned the horrible color scheme.  Quickly, along came Matthew Burket with a pull request to add Zurb Foundation and do some much nicer styling.  (I've been meaning to check into Zurb Foundation, as it's lighter-weight and less-overused than Twitter Bootstrap.  This might be my chance to learn it.  Unfortunately I'm a bit busy at the moment and about to go on a vacation.)  And exactly as Ruby 5 had suggested, he then submitted that pull request to PullRequestRoulette.

   Peter Cooper hasn't covered it yet (it's only been a few days, so no rush), but he did email me back, saying "It might be a StatusCode thing also."  So, watch for it there.  Makes sense, since it's not really specific to Ruby, though it is written in Ruby 2.0, with Rails 4.0.

   Last night, I was also tapped at the next-to-last minute to talk about it at Arlington Ruby Meetup, where David Bock was to be "talking about all the things".  At the very last minute (I was to be the second to last speaker), since things were running a bit late, I was asked to yield my time to Dave Thomas.  (No, not the Wendy's guy, but one of the two original Pragmatic Programmers.)  I retained one minute (doesn't count as fame since it was only a smallish roomful of people) to tell people the basic concept and the URL, which led to a mention on Twitter from Jason Wieringa, retweeted by Chana (who doesn't reveal her last name there so I won't here, JIC she values her privacy).

   Yes, that Dave Thomas was there!  (Talking about Elixir.)  Squeeee!  I didn't get to chat with him directly, tho I probably could have were I a bit more assertive during the usual after-party at Northside Social.  A few people mentioned having learned Ruby from him.  I hadn't, but you may recall the very earliest entries in this blog being solutions to his Code Katas.  And of course I've read The Pragmatic Programmer, which he co-wrote with Andy Hunt.

   David Bock said he thought PullRequestRoulette could be very useful and successful, just needing some personality and momentum.  Matthew Burket's pull request will help with the personality, please help with the momentum!  Add a pull request, commit to reviewing one, and spread the word!


Wednesday, May 1, 2013

Ruby Gotchas

   How could I possibly have forgotten to post this here?!  I gave a talk last night at the Northern Virginia Ruby Users Group, on Ruby Gotchas.  Not only that, but it was an expanded repeat of the one I had given a couple months ago at the Arlington Ruby Meetup.  (I had offered it to NoVaRUG first, but business travel kept interfering.  By the time NoVaRUG had scheduled it again, ARM had also scheduled it.)  The slides are available at

Saturday, April 27, 2013

And now, a bit MORE Heroku confusion!

   Yes, I wrote earlier on how to have a lack of confusion on Heroku.  But sometimes, things can still be a little confusing.  I recently decided to play with Ruby 2, Rails 4, and a new app idea, all at the same time, via Heroku, launching Pull Request Roulette.

   All went well in development mode, trying it out on my own box.  Likewise in test mode, where I was trying to follow fairly strict BDD and TDD.  In production mode, though, and only in production mode (where it was most embarrassing and troublesome, of course), the "Take" feature, whereby someone commits to review a Github "pull request", wasn't working.  It was acting like I was trying to view the record of the pull request, which (at this stage) consists only of the URL.  It's all the worse because there are only two basic features, Submit and Take.

   Production mode on my own machine... showed the same symptoms.  Aha!, I thought, at least it's not Heroku-specific.  Tracking down Heroku problems can be a pain, due to the layer of indirection.  The logs (more easily accessible on my own machine) pointed to an asset problem.  Heroku precompiles on upload (unless disabled of course), so that wasn't it.  What could it possibly be?

   I Googled a gaggle, and it didn't bring me much giggles, especially after spending enough hours to develop late-night-goggles.  Finally I stumbled across a post on StackOverflow that referenced Heroku's documentation about Getting Started with Rails 4.x on Heroku.  I could have sworn that I had read that, and followed all its advice when setting up Pull Request Roulette initially... but the referenced page included instructions to add some gems that didn't sound familiar.

   Sure enough, when I opened up the Gemfile, they were nowhere to be seen.  Adding them fixed the problem.
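
   If memory serves (do check Heroku's current Rails docs rather than trusting a blog post), the gems in question were Heroku's rails_12factor set, which wires up stdout logging and static-asset serving for their environment:

```ruby
# Gemfile -- per Heroku's "Getting Started with Rails 4.x" guide, as I
# recall it; verify against the current docs before copying.
gem 'rails_12factor', group: :production
```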

   So now, if you want to have someone review a pull request you've put together for an open-source project on Github, someone can now commit to review it.  For now, though, you won't know who it was, and there will be no record.  Want to add that feature to the project?  Fork it and create a pull request... and you can submit that one to Pull Request Roulette.

Tuesday, April 9, 2013

Even LESS Heroku confusion!

   Way back when, I wrote about easily using Heroku for both staging and production versions of your web site, without much confusion over which was the just plain "heroku" remote, by explicitly designating them as "staging" or "production" instead.

   Now, to make it even simpler, I've written a tiny little script, that lets me just do "script/deploy staging" or "script/deploy production".  Another advantage of using a script is that I can do additional mundane overhead tasks in there.  In this case, I turn Heroku's maintenance mode on before pushing, and off afterward.
#! /bin/sh

APP=myapp       # base name of your Heroku apps, e.g., myapp-staging and myapp-production
BRANCH=master
SITE=$1

if [ -z "$SITE" ] ; then
  echo "Must specify site (staging, production, etc.)"
  exit 1
fi

heroku maintenance:on --app $APP-$SITE
git push $SITE $BRANCH
heroku maintenance:off --app $APP-$SITE
   To adapt it to your own site, simply adjust the APP and, if needed, BRANCH vars.  (If you want to get fancier and deploy the master branch to the production remote and the develop branch to the staging remote, that's a whole 'nother story... but should be pretty trivial to code up.)
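
   A sketch of that fancier version (the mapping here is my assumption -- master to production, develop to staging -- adjust to your own branching scheme):

```shell
# Pick which local branch to push, based on which site we're deploying to.
branch_for() {
  case "$1" in
    production) echo master ;;
    staging)    echo develop ;;
    *)          echo "unknown site: $1" >&2 ; return 1 ;;
  esac
}

# In the deploy script above, the push line would then become
# (Heroku builds whatever lands on its master branch, hence the refspec):
#   git push "$SITE" "$(branch_for "$SITE"):master"
```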

Friday, March 29, 2013

All Right, Break It Up!

   On a recent engagement, I worked on a very large Ruby on Rails application.  My temporary employer joined the project after it was well underway.  It was composed of four "portals", three of which were already mostly written, and fairly large and complex.  Our work was mainly (and mine was exclusively) on the fourth, at the same time as the client was working on it too.

   An app composed of four clearly defined parts could be written as four, or at least three, separate Rails engines mounted on one app.  Or possibly as four separate apps with a library of shared code, all accessing the same database.  Or if the four parts are relatively small, one application that includes them all.  In this case... it was one monolithic application, and its parts were not small at all.  We repeatedly recommended breaking it up, but the client just didn't want to.  Funny thing about consulting is, the client is always right -- not just when he's wrong, but especially when he's clearly, provably, absolutely flat-out dead wrong.

   "So what?", you might be wondering.  "What good would breaking it up do, aside from pleasing the ivory-tower purists?"

   The problem is, this has quite an effect on the rate of progress.  Even if the engineering of the code itself is very clean, so you don't have god-objects causing churn and interference in the actual code (which the client's code did have), and you have excellent developers making rapid progress in developing features (as, thankfully, both the client and we did, he said ever so humbly)... look at what happens when someone's just trying to commit a feature.  The skill level of the people involved doesn't matter at all at this stage; it's just pure statistics.

   Suppose you are on a team of about twenty developers, making a web app to run a school, with different portals for teachers, administrators, students, and parents.  You and a colleague pair program on the next highest-priority ticket from the backlog in the issue tracker.  It turns out to be in the parent-portal.  You create a feature-branch, off the master branch.   You get the feature passing the automated acceptance tests given in the user story, pull the latest master, merge it into your feature-branch (resolving any conflicts), and run the whole test suite.  It passes, so you didn't break anything.

   Now what?  Merge into master?  You try... and get rejected by the revision control system because someone committed changes to master while you were testing.  Not surprising, as there are twenty of you, all hard at work, and maybe some of them aren't pairing so there are even more than ten features under active development... and the test suite takes an hour to run.  The changes were in a different portal, so unlikely to interfere... but better safe than sorry.  Nobody wants to be the jerk who broke the build, or even the Continuous Integration server.  So you pull the latest master, merge it into your branch,  and run the test suite again.  Still green, so your changes don't break anything that was merged into master during your previous test run.  Nice to know, but it means that one of the test runs was a total waste of time.  Maybe you used it for something else productive... or not.  Either way, it's nearly certain that your mental house of cards concerning that feature has utterly collapsed, because you've been thinking about something else.

   So now, insert the above paragraph, again.  Maybe you pull some fancy tricks like parallelization and extensive mocking and stubbing of slow services and expensive object creation and so on, and cut the time down to fifteen minutes.  That's still plenty enough time for it to happen again.  And again.  Lather, rinse, repeat, ad nauseam, which gets to feeling like ad infinitum... especially to your client, who is breathing down your neck, waiting for this feature.

   Nobody's happy, not him, not your boss, and certainly not you.  You're a good developer, you want to be productive, you believe in testing... but you hate the inane futility of having to do it over and over in vain.

   Now let's consider what it might be like if the app had been broken up into separate engines or apps.  Even assuming a fifth item (an app to mount engines on, or a library of code shared among apps), the size of the codebase you need to deal with at one time is still cut down to 40% (assuming all pieces are of equal size).  You could possibly make it 20%, but let's even assume the worst (within the assumptions already made).  How would that help with this problem?

   First, only 20 to 40% of the changes being worked on (assuming equal distribution of current work) are likely to block your code from being accepted by version control.  (You're probably only working on one of the apps/engines, and maybe the library or main app but usually not.)  Call it 30% to keep it simple.  This means only 30% of the probability that any given change, to the overall system, will make you run the test suite again.

   But wait!  There's more!  (Or rather, there's less!)  The time it takes to run the test suite, should also get cut to about 40%.  (Less when you don't change anything the shared code depends on (so you only have to run one portal/app's tests), same when you do (that plus the shared code's tests), more when you change the shared code (everything).  Call it a wash to make the math easy.  Still certainly a smallish fraction.)

   Not only is that an instant time-savings right there, and not only does it make it easier for you to stay mentally on-task in case some more changes are needed, but the savings compound.

   Combine these two factors, and they mean you have only 12% of the original probability of an interfering change happening during your tests, making you run them again.  Suppose the original probability was 75%, so that you had to run them again 3/4 of the time.  Your new probability is 9%, only about 1 in 11.  Since that's 3/4 versus 1/11 of all test runs, including the ones you did as repeats, that means a drastically shortened chain of tests and retests, with shorter links and vastly fewer of them.
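
   As a sanity check on those numbers: if each full-suite run has probability p of being invalidated by an interfering merge, the expected number of runs per feature is 1/(1-p), by the usual geometric-series argument.

```ruby
# Expected number of full test-suite runs per feature, if each run has
# probability p of being invalidated by an interfering merge (geometric).
def expected_runs(p)
  1.0 / (1 - p)
end

expected_runs(0.75)  # => 4.0  -- the monolith: four full runs, on average
expected_runs(0.09)  # ~= 1.1  -- after the split: a retest is the exception
```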

   Now let's finally pile on all the other benefits.  Of course better separation of concerns would lead to cleaner design, for easier extension, maintenance, and repair.  But it would also ease some of the other pains we were having.  If people were devoted to just some particular portal, and maybe permitted to mess with the shared code, they could separate the backlogs, and the source repositories, and so on.  That means only 20-40% as much email flooding their inboxes about the latest feature additions, bug reports, pull requests, and so on.  They might even turn their email notifiers back on, so they can find out about real emergencies in a reasonable amount of time.  They might be less tempted to skip testing.  They might be able to develop deep expertise on a piece of the project, instead of knowing just enough about all of it to be dangerous.  They might not abandon the project (or the entire company) in frustration.  And on and on.

   So now it's your turn, dear reader.  Have you worked on a megalithic app, with obvious seams to break it apart at?  Did you do it?  If so, what benefits did it bring -- what pains did it ease, what pleasures did it bring?  If not, why not?  Either way, what eventually happened?