During my time at the Pittsburgh Post-Gazette, I had a chance to learn about some of the harsh realities that come with taking on yet another technology. The general idea was that even if it’s “free,” there is unfortunate baggage that comes with adding tools to the newsroom — baggage like increased overhead, learning curves, and brand new risks that have to be mitigated.
I hate to think that a newspaper can’t take advantage of free, open source, low-hanging fruit simply because it would create another system that has to be taught and maintained! At the same time, though, I very much appreciate the position of the incredibly stretched-thin tech guys. This post is about better understanding the “why nots” that discourage newspapers from adopting new technology and trying to figure out if there is any hope of getting around them.
In hobby-land, there aren’t too many reasons not to play with technology. Nerdy people set up — or program from scratch — wikis, forums, and brand new systems just for fun all the time. Their peers are generally savvy enough to use the stuff without instruction and, although terribly sad, a “datapocalypse” won’t cost anyone their job. That’s probably why the cutting edge is so often discovered in the garage — garage dwellers have much less to lose.
Newspapers need to innovate the same way garages do, or at the very least reap the benefits of innovation, so why is it so hard for them to do so?
- Cost of Setup — I’ve mentioned legacy systems before; here is a place where they really constrain. If there is any chance that the new won’t play nicely with the old, then systems administrators have to take extra precautions lest they break something that’s being used. Worst of all, it’s always possible something will go wrong anyway.
- Cost of Implementation — Do databases or servers need to be set up? Does software have to be installed onto multiple computers? Do accounts have to be created for every user? What settings should be tweaked? Even if the software is free, getting it ready for a whole team of people to use might not be easy.
- Cost of Maintenance — I’m told nothing works perfectly, although I hate to believe it. When things that employees use break, someone has to take the time to fix them. Every new tool hosted in-house is another thing that could go wrong and take a day to fix.
- Cost of Backup — In the words of Tim Dunham, the CTO at the Post-Gazette, if a pet system becomes mission critical, it has to be treated like a mission critical system. This means that if a wiki containing organizational knowledge is set up and relied upon, it can’t ever go down and the data can’t ever be lost. Having a backup and recovery plan in place becomes essential, and that takes resources.
- Training and Learning Curves — After hearing some horror stories about even the most minor software changes causing confusion, it seems reasonable to expect trouble when pushing people to use something completely different from what they’re used to. How will everyone learn about it? Will they have to be taught? Is there documentation? Is the interface straightforward? And the answers to these questions will probably lead to more work.
- Functional Overlap — Is it possible that the need the new software would address is already partially met by software that’s set up? In some cases this means that the new idea really isn’t worth the effort, but in others this line of thought might just be an internal excuse to avoid the costs and risks of adopting new technology. Either way, you don’t want to maintain two systems for the same task because it will create miscommunication and general confusion.
The above list may seem depressingly long, but never fear! There are ways to fake flexibility and nimbly try new things. Although I can’t speak from experience, I’m going to throw a few ideas out there — take ‘em or leave ‘em.
- Start Small — Since the dawn of time, people have found ways to lessen the blow when dealing with large scale projects: developers make prototypes, web applications have closed betas, and cavemen probably made miniature wheels before trying full-sized ones. Not everything needs to be launched full-featured and full-scale up front. Pick a small group of people to try the new internal project first, or set up a smaller portion of the feature set, or just use default settings instead of spending hours tweaking to perfection. Doing these things will give you time to work out kinks, get feedback, and figure out how/if the new service might be used before you spin your wheels.
- Try a Web Service Instead — Using an external product is a risky idea for many reasons, and I wouldn’t suggest doing it for anything that involves trade secrets, information storage, or truly vital processes. But if it is possible to use a service someone already provides for free or for cheap, it’s worth giving it a shot. The positive is that you can probably get it up and running in less than an hour and you don’t have to worry about maintenance. The negative is that you lose control and increase risks — what if the service dies off? What if someone steals data?
- Create an Experimental Environment — You don’t need much computing power to host an internal service. Heck, my 9-year-old laptop could probably serve a wiki or chat server without much trouble. Set up a place where techies can try new things without being worried about important tools breaking or data being lost.
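To underline how little it takes, here is a toy sketch of an “experimental environment” service: Python’s built-in HTTP server sharing a folder over the local network. This is an illustration, not a recommendation for production use — the directory and the idea of browsing it from the newsroom LAN are assumptions for the example.

```python
# Minimal sketch: serve a folder of notes over HTTP from any old machine.
# Runs entirely on the Python standard library.
import http.server
import socketserver
import threading
import urllib.request


def start_sandbox_server(directory="."):
    """Serve `directory` over HTTP in a background thread.

    Binding to port 0 lets the OS pick any free port, which is handy
    on a shared sandbox box where ports may already be taken.
    """
    handler = lambda *args, **kwargs: http.server.SimpleHTTPRequestHandler(
        *args, directory=directory, **kwargs
    )
    server = socketserver.TCPServer(("", 0), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


server = start_sandbox_server()
port = server.server_address[1]
# Colleagues on the LAN could now browse http://<sandbox-host>:<port>/ ;
# here we just confirm the server answers locally.
status = urllib.request.urlopen(f"http://localhost:{port}/").status
server.shutdown()
```

If even that much Python feels like overkill, `python -m http.server` from a terminal does the same job in one line. The point stands either way: a sandbox service costs nearly nothing to stand up, and nothing important breaks if it dies.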
- Run an Inventory and Cut out the Fat — One thing that nobody wants is technological bloat — i.e. having more services than are needed. It makes things confusing for everyone and creates unnecessary maintenance overhead. Take some time to go over the services currently available and what needs they address; try to find opportunities for improvement, consolidation, or system retirement. If this means you need to temporarily move backwards to open up resources for new opportunities, so be it.
- Centralized Documentation — Having all documentation, guides, and general commentary in a central location is an incredibly useful way to share technical advice. If done right this will lower the time spent answering common questions and provide a nice way to communicate new information as new tools are launched. I would recommend a wiki for this, but there are definitely other ways.
- Automate and Streamline — Menial tasks in the tech room are probably already automated. For instance, hopefully backups aren’t being done each night by hand. If a chore is taking up too much time, try to automate it so that the whiz kids can work on bigger and better things. It is also worth thinking of ways to do this flexibly to save time down the road when additional steps need to be added.
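As one hedged illustration of automating a menial chore, here is a sketch of a nightly backup script: it tars a directory into a timestamped archive and prunes old copies. The paths and the retention count of seven are illustrative assumptions, not a description of any real setup.

```python
# Hedged sketch of an automated backup chore: archive a directory with a
# timestamped filename, then keep only the newest few archives.
import tarfile
import time
from pathlib import Path


def backup(source, dest_dir, keep=7):
    """Archive `source` into `dest_dir`, keeping the `keep` newest backups."""
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest_dir / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=Path(source).name)
    # Prune: timestamped names sort lexicographically, so the oldest
    # archives come first; drop everything beyond the newest `keep`.
    archives = sorted(dest_dir.glob("backup-*.tar.gz"))
    for old in archives[:-keep]:
        old.unlink()
    return archive
```

Hooked into cron (or Windows Task Scheduler), something like this turns a by-hand nightly ritual into a background task nobody thinks about — which is exactly the kind of flexibility that frees the whiz kids for bigger things.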
The missing section of this post is a list of tools to try — I’m hoping that you guys and gals could help me fix that in the comments. The point, though, is that newspapers need to have the freedom to try new things in a way that doesn’t add much to their technical overhead. Regularly incorporating technology to improve day-to-day operations is incredibly important for the future of reporting.
It is also important to keep in mind that not everything attempted will work, and not everything will get used; just remember that if shots aren’t taken, nothing gets hit. The risk of an individual project failing — once again, many will — isn’t so dire if the setup costs are low. This is why starting small is so important — it allows you to throw a lot at the wall and see what sticks.
Seriously, though, any suggestions, success stories, or tool ideas would be greatly appreciated!