Saturday, February 21, 2015

Case Study: TDD Is Worth It!

   One of my most popular blog posts, counting its readership elsewhere, is Is TDD Worth It?  Recently I had a great example of how, to quote myself, "It's almost like giving you guard rails."

   Suppose the application domain is as follows:
  • Things have Parts.
  • Parts have Types, which have Categories.
  • Within a given Thing, Parts are used with various specific Connectors.
   I had been asked to develop functions to find the Things that use a given Connector, and:
  • Parts of a given Type or Category.
  • Parts of any Type or Category out of a set of Types or Categories.
  • Parts of all the Types or Categories out of a set of them.
  • The negatives of all of the above, i.e., those using:
    • no Parts of that Type or Category,
    • no Parts of any Type or Category of a set,
    • and those not using Parts of all (though they may use some) Types or Categories of a given set.
   The problem is, I had been given these requirements one at a time, so I thought the client wanted a bunch of methods on Thing with names like:
  • find_with_connector_and_part_type(conn_type, part_type)
  • find_with_connector_and_any_part_types(conn_type, part_types)
  • find_with_connector_and_all_part_types(conn_type, part_types)
  • find_with_connector_and_part_category(conn_type, part_category)
  • find_with_connector_and_any_part_categories(conn_type, part_categories)
  • find_with_connector_and_all_part_categories(conn_type, part_categories)
  • find_without_connector_and_part_type(conn_type, part_type)
  • find_without_connector_and_any_part_types(conn_type, part_types)
  • find_without_connector_and_all_part_types(conn_type, part_types)
  • find_without_connector_and_part_category(conn_type, part_category)
  • find_without_connector_and_any_part_categories(conn_type, part_categories)
  • find_without_connector_and_all_part_categories(conn_type, part_categories)
   I kept the code as DRY as I could, making the negatives simply call the positives and invert the finding, and making the singles pass a single-element set to the "any" case. That still left a lot of structural duplication.  Since they were class-methods used as scopes, they all looked like this:
def self.find_with_connector_and_part_SOMETHING(conn_type, part_SOMETHING)
  id_query = assorted_magic_happens_here(several_lines)
  where(id: id_query)
  # negative version: where("id NOT IN (?)", id_query)
end
   That got me thinking about how to combine this set of literally a dozen different functions into one, something like Thing.find_with_parts(part_options), where part_options would include what Connector, what Types (and whether we want all or just any, which would be the default), what Categories (ditto), and whether to invert the finding.  When the client later said they didn't in fact want a bunch of separate methods, I was ready, and had a lot of the idea already thought out.
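The combined call might be sketched like so; the option names, defaults, and the normalizing helper below are my illustrative assumptions, not the client's actual API:

```ruby
# Hypothetical sketch of the part_options hash that a single
# Thing.find_with_parts could accept; all names and defaults here
# are illustrative assumptions, not the real code.
def normalize_part_options(opts)
  {
    connector_type:  opts.fetch(:connector_type),
    part_types:      Array(opts[:part_types]),       # a single Type becomes a one-element set
    part_categories: Array(opts[:part_categories]),  # ditto for Categories
    match:           opts.fetch(:match, :any),       # :any (the default) or :all
    invert:          opts.fetch(:invert, false)      # true for the "without" variants
  }
end
```

With something like that in place, each of the dozen old methods boils down to one call with different options.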

   But... how do I get from Point A to Point B?  This is where TDD came in handy!

   Of course, I had been TDD'ing the original functions, using the classic "red, green, refactor" cycle.  (Actually, I use a variant that adds a step: "refactor the tests".)  So, I substituted a simple call to my proposed new function, for the guts of one of the old ones:
def self.find_with_connector_and_part_type(conn_type, part_type)
  self.find_with_parts(connector_type: conn_type,
                       part_types: [part_type])
end
and reran its tests.  Of course it failed, as I hadn't written find_with_parts yet... but that came pretty easily, based on the logic that had previously found Things having Parts of any of several Types, used with the given Connector.  That test quickly passed.

   Long story short, I followed this pattern, over and over:
  1. Substitute a call to find_with_parts for the guts of a specific method.
  2. Run its test.
  3. If it works, break it by changing find_with_parts!  You always want to start with a failing test!
  4. Fix find_with_parts to make the test pass.
  5. Run the rest of the tests!
  6. If any of them fail, go back to fixing find_with_parts.
When all the old methods had had their guts substituted, I knew the new method could do it all, so I ripped the old methods out.

   Had it not been for having an existing test suite, which I had because I had TDD'ed the original code, I would have had to be a lot slower, and more careful and methodical, in that process.  Instead, I could just quickly try what came to mind, and see if it worked without breaking anything else.

   The resulting function, including some extracted functions to keep that code DRY, runs to a mere 43 lines of code, and is structured in such a way as to make it very easy to add additional items to filter on, such as the Material a Part Type is made of, the color a given Part is painted when used in that Thing, etc.

   (Yes, yes, I could have created the test suite just before embarking on this... but seriously, what are the chances?  Most developers would not bother, for something smallish like this.  Perhaps for a larger code archaeology expedition, where writing a test suite to ensure continuing the current behavior, whether correct or not, is a common first step.)

Tuesday, December 2, 2014

HookLyingSyncer: gem to keep method_missing and respond_to_missing? in sync

   A while back, I wrote about the need to keep method_missing and respond_to_missing? in sync.

   (A brief refresher: in 2011, Avdi Grimm wrote a blog post about that.  In the comments, I wrote up a quick and dirty hack to do so, in very restricted cases, and then a still-dirty improvement, which unfortunately has since been mangled somehow.)

   At RubyConf 2014, Betsy Haibel spoke on Ruby metaprogramming, including that same need.  That inspired me to work on the concept again, taking a different approach (essentially a decorator) that I had briefly considered in those comments.

   The result is my new gem HookLyingSyncer.  (I was going to call it FirstResponder, but that name was already taken.)  For now, the code looks like:
class HookLyingSyncer

  def initialize(object, matcher, &block)
    @object = object
    @matcher = matcher
    @block = block
  end

  def respond_to_missing?(sym, include_all=false)
    matches = find_matches(sym)
    matches.any? ? true : @object.send(:respond_to?, sym, include_all)
  end

  def method_missing(sym, *args, &blk)
    matches = find_matches(sym)
    if matches.any?, matches, *args)
      @object.send(sym, *args, &blk)
    end
  end

  def find_matches(sym)
    result =
    result ? result : []
  end

end

   The tests contain some examples, with further usage explanation in the README.  Long story short, it can be used on instances and classes, to add or override method definitions, including overriding new so as to add methods to all new instances of a class.

   This is my first time actually making a gem, and I haven't done much with metaprogramming before, especially something that other people are going to use to do their metaprogramming.  So, any feedback would be greatly appreciated!

Thursday, November 6, 2014

Windows and Linux and Mac, Oh My!

   Someone recently asked in the Google Plus Ruby on Rails community: Which platform would be the best to use for Rails?  My answer got so long, and is so applicable to working in most other languages, that I decided to turn it into a blog post, so here it is, with a few minor edits.

===8<--- cut here ---

   Basically, any reasonably popular platform EXCEPT Windows.

   Windows (ignoring the Server variants) is made for the desktops of non-technical people (and people developing software specifically for Windows).  It comes with essentially zero development tools, and you have to pay for most of the serious ones, especially for the MS stack.  You can kinda-sorta fake a lot of Unix by installing Cygwin, but that's just a kluge and falls way short.

   Linux and other Unix variants such as BSD, Solaris, etc. are made for servers, and the desktops of VERY technical people.  They come with (optional on installation) lots of serious development tools, with many more easily available, most of them free.  Of these OSes, Solaris is pretty much dead, BSD is very rare outside the server room (so there are nowhere near as many resources for help and education), and the others except Linux have very tiny market share, so let's focus on Linux.  However, Linux has a bad reputation for requiring a lot of tweaking to get it to work reasonably well, especially if you're using hardware that is in any way not absolutely standard, such as a graphics card from within the past year or from a maker that's not one of the top three.  That was the reality, and why I switched to a Mac, in 2004, but I've heard Linux has gotten a LOT better about this since then, especially the Ubuntu "distro" ("what's a distro" is a whole 'nother question!), which (I've heard) places great emphasis on working right out of the box.  Linux is also free (though you can buy boxed sets with docs, support, and so on), and generally efficient enough to make good use of older PCs that won't run well under recent versions of Windows.

   A Mac is a reasonable compromise, and has been ever since OSX first came out.  (Before then, it was aimed mainly at graphics people, like artists and people who put together print newsletters and magazines.)  The tooling situation is similar to Unix, except that it doesn't come with quite so many tools, and usually with older versions.  It also doesn't require anywhere near as much tweaking as Linux does, because you're running it on exactly the hardware it was designed for.  It's even more consistent in its UI behavior and look-and-feel than Windows, and about as easy to understand -- but it's different, so you'll have a lot to "unlearn" if the Windows way of doing things is deeply ingrained in your habits.  It also used to be much more stable and secure than Windows, but MS has made great strides in security and stability, catching up and, depending on how you measure things, possibly surpassing OSX's security (I'm not sure about stability).  On the other claw, a Mac is a good bit more expensive than a bog-standard Windows box, never mind Linux, but frankly, they're so good that putting together a Windows PC with the same performance will usually cost just about as much, even before factoring in the cost of serious dev tools, OS upgrades (usually dirt-cheap or even free on a Mac), etc.  You can get some good bargains on a used Mac, one or two generations old; ask a Mac-using friend to sell (or even give, if you're lucky!) you one of his old castoffs, or take your chances on eBay.

Wednesday, July 23, 2014

Don't Cast Your Net So Wide!

   My current client, in my work as a freelance software developer, uses Jing screencasts heavily.  The QA analysts use them to demonstrate problems and how they test solutions.  I've started using them myself for demos of my implementations (for BA/PO approval) and bugfixes (for QA approval).

   But don't worry, this isn't going to be a heavily technical post, with programming and systems terminology thrown at you.  It's just going to be a bunch of tips I've figured out for making a good screencast that your viewers can easily follow.  These tips apply whether you're recording with Jing or any other tool, and whether you're hosting the video at the site Jing offers, or anywhere else.

   First, sometimes people screencast a very large area, when they don't need to.  It might be the entire screen of their monitor, or a very large window that could have been made smaller.  This means that the viewers have to use a very large window themselves, or scroll around a lot, in order to see everything.  Maybe they don't actually have to see everything that's being shown.  That's a prime indication that the size should be cut down!  This becomes even more important with Jing-type videos (as opposed to YouTube), since you can't just click anywhere, or press the spacebar, to stop or pause it.

   But how can you cut down the size?  What I've been doing is to use a relatively small window (whether browser or terminal or whatever), usually as small as I can.  For what I've been doing lately, that's around 960x720, but I've done others at 800x600 and even 640x480.  (If you host your videos on YouTube, they have several standard sizes you can aim for, for the clearest picture.)

   You might need multiple windows, such as if your demo involves both a terminal or editor window where you're editing code or directly manipulating data in a database, and a browser window where you show the results.  In that case, stack them in the same area, so that the largest one completely covers the other(s).  If you need to have multiple of them visible at the same time, lay them out that way, within the size of the largest one.  If that's not possible, use another window that's there just to establish the size; you don't have to ever actually show it.

   Then, when starting your screencast, tell your software to record the area covered by the largest window.  With Jing, it's easy; upon telling it to Capture, it will give you crosshairs so you can lay out an area.  Instead of dragging an area, simply click on the window, and it will choose that as its area.  (It will also tell you just how wide and tall the area is, so you can get the size you want.)  If you're using something else, it probably has a similar feature; at the very least, it should let you switch which window is being recorded (so you might not need to stack them), maybe even following along with focus.

   Sound quality is also important.  If your voice sounds "distant", with your system's fans making a lot of noise, you may be very hard to understand.  The usual cause of this is using your laptop's built-in microphone.  That's fine for informal chatting, but for recording, where the listener can't just ask you to repeat something, it doesn't cut the proverbial mustard.

   At the very least, use some kind of external microphone.  You can use an old-fashioned one, or even the one on a pair of cell phone earbuds, plugged into your system's microphone jack -- if it has one.  If you opt for earbuds, just be careful about letting it rub against your shirt, as that will make noise.  Even laying it on the desk in front of you will probably be better than a laptop's built-in mic, as it will be further from the fans and closer to your face, but beware of typing noise.

   Better yet, use a headset, with a boom mic.  (Position the mic higher than the base of your nose, so you don't get breathing noises.  I usually put mine beside my cheekbone.)  If it's the kind with a fairly directional mic, "listening" mainly in the direction of your mouth, it will even help cut out some of the other random background noise.  Headsets can still come with the old-fashioned 1/8" plug, but USB headsets are everywhere nowadays.  You can get a decent-quality USB headset for well under $20, though you may find it worth splurging if you use it a lot, or need fancy features like a noise cancelling mic.

   (For about four years now, I've been using the cheap-seeming set that came for free with my Rosetta Stone order.  I've used it for an average of about half an hour a day in that time.  It's comfortable, very light, and gives very good voice quality -- as you'd expect, to feed into their speech recognition software.  The only problem that has developed is a slight looseness in the boom when within about 45 degrees of straight up.  You could probably find something of similar quality for the princely sum of $10 on eBay.)

   Then there's the whole matter of presentation skills.  On that, I'll mostly defer to Toastmasters International.  The few tips I will put in here are: plan out what you're going to say and do (typing, mousing, window switching, etc.), speak loudly enough to be heard easily but not painfully loudly, and don't drone on in a boring monotone.  Keep your sentences digestably short, and use vocal variety to emphasize the important points.  Remember, the listeners can't see you!  (Except of course if what you're screencasting is your camera feed, but that's a whole 'nother story.)

   After you've finished making your screencast, watch it yourself!  You might spot some parts where it might not be clear what you're trying to show (or even what you said), or where you make the watcher sit through an extended period of your floundering around trying to figure something out.  If so, take it as a practice run, and do it again.

Thursday, October 24, 2013

Vendor your Bundle

   Has this ever happened to you?

   You're working on a Rails project that requires version X of some gem.  But you're also working on another project that requires version Y.  If you use the right version on one, and ever clean things up, you break the other.  And if you ever need to look at the source of a gem, it's squirreled away in some directory far from the projects.  What to do, what to do?

   You could use rvm, to create a new gemset for each one.  After a few years of projects, you wind up with a gazillion gemsets.  And if you ever try to upgrade the Ruby a project uses, by so much as a patchlevel, you have the hassle of moving all those gems.  And the paths to your gems get even hairier.

   Alternative Ruby-management tools like rbenv and chruby might offer some relief; frankly, I don't know... but I'm not holding my breath.

   But there's still hope!

   You could stash each project's gems safely away from all other projects, inside the project!  How, you wonder?

   When you create a new Rails app, don't just do rails new myproject.  Instead, do rails new myproject --skip-bundle.  This will cause Rails not to install your gems... yet.

   Now, cd into your project directory.  Edit your Gemfile; you know you would anyway!  When your Gemfile is all to your satisfaction, now comes the magic: bundle install --path vendor.

   What's that do, you wonder?  Simply put, it puts your gems down inside your vendor subdirectory -- yes, the same one that would normally hold plugins, "vendored" assets, etc.  There will be a new subdirectory in there called ruby, under which will be subdirectories called bin, bundler, cache, doc, specifications, and the one we're interested in here: gems.

   Now you can upgrade your gems without fear of interfering with another project.  You can also treat the project directory as a complete self-contained unit.

   But wait!  There's more!  As an extra special bonus, if you want to absolutely ensure that there is complete separation between projects (handy if you're a consultant, like me), you can even make these project directories entirely separate disk images!  For even more security, you can then encrypt them, without having to encrypt other non-sensitive information, let alone your whole drive.  Now how much would you pay?  ;-)

   In the interests of fairness, though, I must admit there is a downside to this approach: duplication of gems.  Suppose Projects A, B, and C all depend on version X of gem Y.  Each project will have its own copy.  For large projects, that can soak up a gigabyte of disk space each.  You can cut down on this by making the gems symlinks or hardlinks to a central gem directory... but why bother, in this age when disk space is so cheap?

   If you know of a better way, or more downsides to this, or have any other commentary, please speak up!  Meanwhile, go try it for yourself, at least on a little toy side-project.  I think you'll be pleased with the simplified gem-source access, and much looser coupling between projects.

   UPDATE:  Some people read the above and thought I meant to put the installed gems in your source code repository (e.g., git), so that the gems would be put into your staging and production environments from there.  This is a serious problem for gems with C-extensions to compile, if your staging and/or production environments are on different system types from your development environment.  That situation is very common, as many (if not most) Rails developers prefer Macs, and the production machine (and staging if any) is typically Linux, or occasionally one of the BSD family.

   This is not what I meant.  Instead, you probably want to put your gem directory (usually vendor/ruby) into your .gitignore file (or equivalent for whatever other SCM system you may be using), so that your SCM repo will ignore them.  Do still SCM your Gemfile and Gemfile.lock.  Then, when you install to staging or production, you will get the same versions of the same bunch of gems, but the C-extensions will still be compiled for the particular environment.
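Concretely, assuming git, that boils down to one line in your .gitignore (while still committing your Gemfile and Gemfile.lock):

```
# .gitignore -- keep the vendored gems themselves out of the repo
/vendor/ruby
```

Then a plain bundle install on staging or production rebuilds the same set of gems, C-extensions and all, for that machine.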

Sunday, October 6, 2013

Ruby Day Camp Review

   Some of you, especially locals, may recall that I got waitlisted for RubyDCamp, and decided to put on Ruby DAY Camp for us poor shlubs.  Some of you have been asking how it went.  So, forthwith, a review:

===8<---cut here---

   TL;DR: Ruby Day Camp was a qualified success, with a fun and educational time had by all of the handful who showed up.

The Good:

   A fun and educational time was had by all, with great camaraderie and surprisingly good lunch.

   We started with a couple rounds of code-retreat (doing the Game of Life), had lunch, and spent the rest of Saturday and Sunday as an unconference.  There were discussions on Ruby Gotchas, NoSQL databases, Big Data, freelancing, consulting, and other random topics, mostly at least somewhat Ruby-related.  We wrapped up with a four-way round of code retreat.  It started as a three-way rotation among the more advanced Rubyists, while the other advanced ones, and a few newer ones, watched on a large monitor.  A fourth advanced Rubyist joined us late in the day.  Since the even number would mean that each person would always be in the same role (writing either tests or production code), we decided to mix it up, with the next “player” chosen randomly (by a Ruby script).  All in all, a good weekend-ful of Ruby (and related) geekery.

   The weather was perfect, a bit cool in the morning, warming up to very comfortable in the afternoon, especially in the shaded shelter.  Just as with RubyDCamp, a great time to hold a semi-outdoor event.

   The venue was wonderful, a shelter at Pohick Bay Regional Park (about halfway between the Beltway and RubyDCamp), with good power (once we went to the ranger station and had it turned on), and grills.  The only downsides were a fairly long trek to the bathrooms, and lack of Metro access (but still better than RubyDCamp).  At least said bathrooms were fairly clean.

   Saturday’s lunch was a bit chaotic, but it all worked out.  Some brought something for themselves, but some did not, and needed to make a fast food run.  Some had brought enough to share, such as sausages heated up on the grills, using charcoal bought on-site.  More people brought things to share on Sunday, so we even had some bagels and cream cheese for breakfast, in addition to the sausages, hot dogs, and hamburgers for lunch.  There were even leftovers! It was a great example of a self-organizing agile team.  :-)

The Not-So-Good:

   Attendance was very low; even with a couple of walk-ins, we only had seven people, including myself.  There were several no-shows, some without notice.  This can be mitigated by starting to advertise it earlier, and maybe more widely.  The financial effects on the organizer (or whoever ponies up for the shelter reservations) can also be mitigated by starting off on a “donate to register” basis, rather than “register for free and we’ll pass the hat there”, as I had initially done.  Of course, full corporate sponsorship would help with both!

   The low attendance made it impossible to support multiple tracks, as I had hoped to have.  But even so, we managed (I think) to hold discussions that provided value for the advanced Rubyists without overly confusing the new ones.

   Scheduling got very sloppy.  We started late, ended early on Saturday (late on Sunday but that’s because we basically decided to keep going), and had long lunch breaks.  Some of this can be fixed by experience (so we know how long some things are likely to take, like lighting charcoal without lighter fluid), keeping a better eye on the time, writing out a schedule in advance, doing some things differently (e.g., get funding and bring lunch makings, or coordinate potluck, and publish an earlier start time), or by just “going with the flow”.  Making a map and giving directions might also help people arrive on time for the first day.

   The list of topics for agenda-bashing was a bit much.  That was my fault entirely.  I should have stuck to the usual way (having participants fill out cards for each topic they really wanted to talk about), rather than brainstorming and writing down whatever topics came to us.

   Only about half the expenses got covered.  Again, more time would probably mean more signups (which, since it was “donate to register”, meant donations), plus maybe more ability to get corporate sponsors.  We could also raise the suggested donation.  But, since the expenses were fairly low (about $400), it’s not a big deal.  (The suggested donation was $20.  Donations ranged from $1, from someone who didn’t show, to $50, from someone who did.  Most others donated at the suggested level.)

Next Time?

   Next year of course I’ll be trying again to get into RubyDCamp.  But if I can’t, I would consider doing Ruby Day Camp again instead.  At least I’ve got this year’s experience to build on.  If there are enough registrations (or corporate funding), we could even add perks, like food and maybe even overnight lodgings.

   If I do get a RubyDCamp slot, I’d still encourage someone else to run Ruby Day Camp.  I’d gladly help put it together, even if I wouldn’t be attending.

   Alternately, I could do it in the spring, or some other time that would not compete with RubyDCamp...  but then RubyDCamp waitlistees wouldn’t have an alternate activity, for a weekend they may have carefully blocked out on their calendars.  But then again, we’ve all got side projects we could be working on!

Monday, September 23, 2013

Is TDD Worth It?

   OOPS!  I was asked to flesh out an earlier version of this post, to contribute to the blog of one of my clients (Celerity IT).  In doing so, I messed up the existing post.  Rather than retrieve it from the dim dark depths of Internet history, I give you here the new version.  Meanwhile, they have further edited it and posted their own version on their blog.

   There is some controversy among developers whether TDD (Test Driven Development) is really worth all the extra time it seems to take.

   To answer this, first we must define what TDD is!  Basically, it means developing a small piece of functionality by first writing a test for it, then code to make that test pass.  For instance, in making a job board, a piece of functionality might be "get a list of jobs with given text in the title".  So, you might write a few tests like "with an empty database, create a job with the title 'Java Developer', ask the Job class for all jobs with 'Java' in the title, and assert that I found that job", and the same but looking for 'Ruby' and assert that you didn't find it.

   (This is not to be confused with BDD, or Behavior Driven Development.  BDD is like TDD from a user's point of view, rather than a developer's.  It usually uses much more English-like language, so as to let non-technical stakeholders be involved.  This can help narrow the gap of understanding between them and the developers.  Many people do BDD for the broad overview and then TDD for the nitty-gritty internal details.)

   Most developers, however, go a bit further.  To most, TDD is a cycle of "red, green, refactor":
  1. Red: write a test, to test whether the code (that you haven't written yet) does what you want... and verify that it fails.  If it doesn't, then your test is meaningless!  (Writing good tests is an art unto itself, which I won't go into in this post.)

  2. Green: make the test pass... and keep the whole test suite passing!  If your code broke anything else, you must now go fix the breakage, whether that means updating an outdated test, tweaking your new code, tweaking old code, etc.  You can't call it "green" until the whole test suite passes!

  3. Refactor: this is what makes it "go further".  To refactor a piece of code means to improve the internal design, without altering the behavior.  There have been many books written about this, so I won't go into detail; just know that, even above and beyond the benefits of the red and green parts, TDD practitioners feel a responsibility to clean it up.  If the way you got the test to pass was a horrible little kluge (admit it, we've all done it!), make it right before you check it in.
   So... does this take extra time, and is it still worth it?

   One of the dirty little secrets of TDD is that, yes, it will slow you down... in the very short term.  If you just want to get a feature implemented today, and don't care about tomorrow, you might be better off skipping testing, whether before or after coding.


   That would not be wise in the long run, or even the medium run.  You have to think of it as an investment.  (This pairs quite nicely with the notion of "technical debt".)

   TDD will help you get that feature to market even more quickly than skipping the tests, and with far better quality!  The process of getting a feature not just implemented but also to market allows enough time for bugs to be noticed, and need fixing.  And for other features to be added that might interfere with this one.  And for bugs to be noticed in that other feature, whose fixes might interfere.  And for situations to crop up that you just didn't anticipate.

   THAT is where TDD will save your bacon!  The test suite, that you grow along the way, will help get those features implemented, and bugs fixed, without breaking other features.  It's almost like giving you guard rails.  If something does break, then the test suite will help pinpoint it, saving you hours of exploratory manual testing.  In the long term it will save the project hours of exploration, debugging, finger-pointing, and other such nonsense... and if you're really doing Test-DRIVEN Development, probably guide you to higher quality code in the first place, saving months of disentangling and re-implementation.

   But how does TDD do that?  First we have to define what we mean by "quality".  The two main things TDD will help with are, from a general standpoint, "it does what it's supposed to do" (including not having bugs), and from a geekier standpoint, "it has better internal design".

   The first part is obvious.  After all, that's what the tests prove.  But what about "better internal design"?  What does that even mean?  There are many aspects of software design, but TDD guides you to think in terms of small easily testable pieces.  This leads to code that is more reliable, modular, reusable, flexible overall, and a host of other benefits.  For this reason, some people are now claiming that TDD should stand for Test Driven Design rather than Development.  Perhaps our more DoD-minded colleagues will call it Test Driven Design and Development, or TDDD, or T3D for short, in much the same way they keep coming up with more C's to precede an I.  ;-)

   Of course, if you include the Refactor part of the cycle, that's another investment, one that usually pays off quite well in the long run.  Paying attention to proper design early will make it much more likely that the code will stand the test of time, lasting much longer before needing to be totally chucked out and rewritten.  We've all seen code so horrendous that we'd rather start over from scratch, rather than modify it -- don't be "that coder".

TL;DR: TDD does make it take longer to implement a feature... but not to get it
to market, and it yields much better code, saving even more time and expense later.