Today I’m starting to re-purpose old content into an email course aimed at designers and front-end developers. I’ll work on this in the coming days (for instance, the emails still need topping and tailing, and all the content is draft), but I wanted to get the majority of the structure down.
I’ll also likely merge some sections and make the content more friendly.
1. Pitching technical projects
2. The benefits of roadmapping
3. Workflows: Version control
A version control system plays an essential role in storing and tracking changes to your codebase. As you’ll see in a later email, it is also a key component of a consistent, trouble-free development, testing & production hosting environment.
Even if you aren’t working with a distributed development team, using a version control system will pay significant dividends in terms of securing your code, handling conflicts, and facilitating troubleshooting.
These days the most popular system is Git, which you’ve probably already heard of. It’s free and runs on pretty much any platform. If you choose Git, we’d recommend one of the collaborative hosting services built around it, such as GitHub or Bitbucket, which add a ton of value to the experience.
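For a designer or front-end developer new to Git, the day-to-day loop is small. Here’s a minimal sketch (the project and file names are purely illustrative):

```shell
# Create a throwaway repository and make a first commit.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo-project
cd demo-project

# Identity for this repo only (normally set once, globally).
git config user.email "dev@example.com"
git config user.name "Example Dev"

# Make a change, stage it, and record it with a message.
echo "<h1>Hello</h1>" > index.html
git add index.html
git commit -q -m "Add landing page"

# One line per commit: short hash + message.
git log --oneline
```

The collaborative services mentioned above layer pull requests, code review, and issue tracking on top of exactly this loop.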
[more needed here]
4. A Consistent Server Architecture Across Development, Testing & Live
This is arguably the single most important takeaway from this course. If you don’t already have a consistent development, test & production environment, then you’ll always be fighting an uphill battle.
Systemising your deployment workflow and ensuring that each environment is the same is essential if you want to remove headaches and inefficiencies, and it also expands your hosting cost and performance options.
[more needed here]
5. Portable Development Environment Baked into Your Codebase
The primary building block of a contemporary development workflow is a development environment that closely (ideally, exactly) matches your production environment, built using virtual machines and tools like Vagrant or Docker.
These tools use either machine-level (Vagrant) or operating-system-level (Docker) virtualisation to build a standardised environment complete with the correct operating system, an isolated file system, standard libraries, and any other dependencies you want to be available for developers. Because Vagrant and Docker run on a very wide range of operating systems, it’s extremely easy for all developers to standardise on a single box (Vagrant) or container (Docker), which really means standardising on an identical development environment no matter how the development or testing host computer is actually configured.
The real power of a tool like Vagrant comes when you apply version control to the simple text configuration files that define your Vagrant development environment. You don’t have to store a 10GB binary ISO disk image in version control, just a few simple text files, and any developer with a network connection and Vagrant installed can get an identical machine up and running locally.
Vagrant’s configuration file (the Vagrantfile) is what makes a consistent development environment, even in a diverse, distributed team, not just possible but easy.
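To make this concrete, here’s a minimal sketch of a Vagrantfile. The box name, forwarded port, synced folder, and packages are illustrative choices, not a recommendation:

```ruby
# Minimal Vagrantfile sketch (box, port, and packages are illustrative).
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"               # base image shared by the whole team
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.synced_folder ".", "/var/www"         # code lives on the host, runs in the VM

  # Inline shell provisioning; larger setups would use Ansible/Chef/Puppet here.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update -y
    apt-get install -y apache2 php5
  SHELL
end
```

With this file checked in to version control, `vagrant up` is all a new developer needs to run to get the same machine as everyone else.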
- Learn more about Vagrant
- Learn more about Docker
- Vagrant GUI tools:
- Vagrant Manager for OS X or Windows
- PuPHPet, a simple hosted web GUI to set up a Vagrant environment for Web development
- Vagrant Quick Start Tools:
- VVV, a configuration framework for WordPress development on Vagrant
- perch-vagrant, a configuration starter for Perch development on Vagrant
- Vagrantbox.es, a list of publicly available Vagrant boxes.
6. Provisioning Tools For All Environments
When building your development environment using Vagrant or Docker, the best way to specify what should go onto the machine is to use one of the supporting scripted configuration tools like Ansible, Chef, or Puppet.
Scripted provisioning is often a very large step towards reducing the cost and risk of (re) building new servers. You get to skip days of repeated hand-configuration effort every time a new developer starts, and you gain cost and operational flexibility along with an easy hosting upgrade or downgrade path if your needs change.
Scripted provisioning is an essential factor when making your test & live hosting environments more stable and consistent.
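As a flavour of what scripted provisioning looks like, here’s a hedged sketch of an Ansible playbook. The host group, package list, and template name are all illustrative:

```yaml
# Sketch of an Ansible playbook (host group, packages, and template are illustrative).
---
- hosts: webservers
  become: yes
  tasks:
    - name: Install the web stack
      apt:
        name: [apache2, php, git]    # recent Ansible versions accept a list here
        state: present
        update_cache: yes

    - name: Deploy the virtual host config
      template:
        src: vhost.conf.j2
        dest: /etc/apache2/sites-available/app.conf
      notify: reload apache

  handlers:
    - name: reload apache
      service:
        name: apache2
        state: reloaded
```

Run with `ansible-playbook -i inventory site.yml`; the same playbook can target your Vagrant box, your test server, and production, which is exactly what keeps those environments consistent.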
7. A Mature Test Environment
The lack of a useful test environment is the cause of too much chaos during the run-up to deploying new code. The beauty of using a portable development environment like Vagrant or Docker is that it becomes much easier to reuse the development environment configuration when creating a test environment.
When provisioning a test environment, you do not necessarily need to use the same hosting company or level of hosting “horsepower” as your live environment uses. In fact, to keep costs low, you may opt for an on-demand test environment instead of a dedicated server, or a less powerful test environment.
That doesn’t mean you can get away without a well-configured test environment. The same qualities that make Ansible or Chef such a great way to configure a consistent, portable development environment make those tools just as useful for configuring a test environment.
However you configure it, the presence of an identical test system is a huge step forward when building a professional development workflow.
[Merge into 4]
8. A Flexible & Scriptable Production Hosting Platform
While you may need dedicated servers for some or all of your web applications, using a more dynamic hosting environment based on Amazon Web Services, Google Cloud, or similar provider can help you respond more quickly to changing application usage.
Especially with a testing environment, being able to provision and turn down servers at will helps keep costs reasonable without adding significant overhead to your testing workflow.
The best bit is that if you are already using scripted provisioning tools, you already have the ability to provision new servers once they are built. But they still need creating in the first place, and that’s where a decent API comes into play.
Most enterprise-grade hosting companies have an API that allows you to script ‘spinning up’ new servers on the fly that you can then provision with your Ansible or Chef build scripts. This is particularly useful if you have demand spikes or if you’re looking to save money. You might use on-demand test servers, simply bringing them up after a code check-in.
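As an illustration, assuming an AWS-hosted setup, scripting an on-demand test server might look like the sketch below. The AMI ID, key name, and instance type are placeholders, and this obviously needs valid AWS credentials to run:

```shell
# Launch an on-demand test instance (AMI ID, key name, and type are placeholders).
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t2.micro \
  --key-name deploy-key \
  --query 'Instances[0].InstanceId' --output text)

# ...provision it with your Ansible/Chef scripts, run the test suite against it...

# Tear it down again so you only pay for the minutes you used.
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"
```

The pattern is the point, not the provider: create via API, provision via script, destroy when done.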
9. A Continuous Integration (CI) System
A CI system brings all the parts of your development workflow together. Now that you have a version control system and multi-environment scriptable server provisioning, you need something to trigger a series of actions whenever your underlying code changes, so you can check that all is well.
A typical use-case for CI is for the test branch of your code. Your developer checks in to the test branch, your CI system is notified automatically (or it has been sitting there listening), and it triggers a new build. This could include:
- Checking out the code and running automated test & profiling tools against it
- If the tests pass, spinning up a new test server and deploying the code to it
- Running automated load tests against the new test server
- Regenerating your code’s API documentation, and running the profiler against it to see where bottlenecks might appear
- Running routines to ensure that coding standards have been adhered to
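At its heart, a build like the one above is just a fail-fast sequence of stages. Here’s a toy sketch of that behaviour (the stage contents are stand-ins, not real build commands):

```shell
# Minimal sketch of the fail-fast behaviour at the heart of any CI build:
# stages run in order, and set -e stops the build at the first failure.
set -e
run_stage() {
  name=$1; shift
  echo "== $name =="
  "$@"
}
run_stage "checkout"    true                                  # stand-in for: git checkout test
run_stage "unit tests"  sh -c 'echo "12 tests, 0 failures"'   # stand-in for your test runner
run_stage "lint"        sh -c 'echo "coding standards: clean"'
echo "BUILD OK"
```

A real CI server (Jenkins, for instance) adds the trigger-on-check-in part, the build history, and the notifications, but the stage-by-stage, stop-on-failure shape is the same.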
The longer your developers go between code check-ins, the more painful those check-ins are, along with the subsequent build and test process. CI combines more frequent check-ins with automated unit testing, QA checking like profiling, and even support processes like bringing a test environment online, documentation updates, staff notifications, and so on.
When you start using CI, start small and move slowly. CI is both a workflow and a toolset change, and it can get messy if you move too fast. Testing is the most obvious place to begin using CI, and you might use CI for staging and production only after getting comfortable with CI in a testing environment.
- Jenkins: This is the CI system we use at Siftware, but there are many others.
- This Wikipedia article is a good start.
- Beanstalk also offers CI features.
10. A Ticketing System
Email… is not a ticketing system. Neither is a spreadsheet, really. A good ticketing system allows you to track customer and internal development issues with just the right amount of metadata to help prioritize and filter the detail.
You need either an in-house or hosted ticketing system that you control because:
- You don’t want all that customer support information walking out the door with a contractor or other vendor.
- To provide good support with high efficiency, you need something more suited to the purpose than email or a spreadsheet.
There are excellent hosted and self-hosted ticketing system options. In fact, if you’re using Github or Bitbucket for version control, you have a built-in ticketing system. Make use of it!
- Gitlab: Includes issue tracking, version control, and other functionality
- Trac: Focused on issue tracking
- Github: Includes issue tracking, version control, and other functionality
- Bitbucket: Includes issue tracking, version control, and other functionality
- Beanstalk: Includes issue tracking, version control, and other functionality
- Jira: Focused on issue tracking
- Fogbugz: Focused on issue tracking
11. Documentation
There are two ways you can use documentation to reduce the cost of developing and maintaining your web applications. And before you say it, no, I’m not about to tell you to “document your code!” Yes, of course, you should do that.
You should also document:
- Your procedures and standards for version control
- Change management procedure
- Coding standards
- Deployment strategy
- Security policies
- phpDocumentor: Creates reference documentation automatically from your PHP source code.
- The PSR-1 and PSR-2 coding standards provide a good example to base your PHP coding standards on.
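As a taste of what phpDocumentor works from, here’s a docblock-annotated function. The function itself is purely illustrative:

```php
<?php
/**
 * Calculate the gross price including VAT.
 *
 * @param float $net  Net price
 * @param float $rate VAT rate, e.g. 0.2 for 20%
 *
 * @return float Gross price
 */
function grossPrice($net, $rate)
{
    return $net * (1 + $rate);
}
```

phpDocumentor reads the `@param` and `@return` tags (and the summary line) to build browsable reference pages, so keeping docblocks accurate pays off twice: once in the editor, once in the generated docs.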
12. Monitoring and Profiling Tools
Now we’re getting into more advanced territory! There are several incredible tools that give you visibility into where your codebase needs more work, rather like a heat map for problems in your code. AppDynamics and New Relic are two tools that can give you deep visibility into web application performance and help you resolve those harder-to-locate issues.
Advanced configurations can also include setting benchmarks for application performance, possibly at a page or even code-block level, and having alerts generated when thresholds are breached: certain database transactions running too slowly, a particular function in the code eating lots of memory, or errors being generated and not caught.
If you’re serious about getting your web application stable, whether to improve the user experience or to weed out costly ongoing maintenance tasks, you would do well to consider utilising a professional monitoring stack.
This post is one of 30 I wrote daily during April 2016 as part of the 30 Day Writing Challenge.