Version Control with Git (Best Tutorial 2019)

If you’re not already using some type of version control on your project, you should definitely start now. Whether you’re a single developer or part of a bigger team, version control provides numerous benefits to any project, big or small.

 

If you’re not familiar with what version control is, it is a system used to capture and record changes to the files within your project. It provides you with a visible history of those changes, giving you the ability to go back and see who made a change, what was changed (both the files involved and their contents), when the change was made, and, by reading the commit message, why it was made. This tutorial covers the free and open source Git, along with GitHub.

 

In addition, it provides you with a mechanism to segregate changes in your code, called branches (more on that later).

 

There are a number of version control systems available, some free and open source, others proprietary and requiring licensing. For the purposes of this blog, we’ll be focusing on the free and open source Git. Git was first developed by Linus Torvalds for the Linux Kernel project.

 

Git is a distributed version control system (DVCS) that allows you to distribute many copies (mirrors) of your repository to other members of your team while still tracking changes.

 

This means that each person with a clone of the repository has an entire working copy of the system at the time of the clone. Git was built to be simple, fast, and fully distributable.

 

This blog is meant to give you an overview of Git, covering enough information to get you started using it every day in your projects. Since I only have one post in which to cover this, I’ll only touch on the surface of Git’s most commonly used functionality.

 

Using Git


To start using Git, you first need to install it on your system. Binaries for Mac OS X, Windows, Linux, and Solaris are available from the official Git website (git-scm.com); download the appropriate installer for your OS.

 

In addition, Git is available on RedHat/CentOS systems through the yum package manager and on Debian/Ubuntu through apt-get. On Mac OS X, you can also get it by installing the Xcode command line tools. In this tutorial, we will use the Linux version of Git.

 

Git Configuration

Now that Git is installed, let’s do a minimum amount of configuration by setting your name and email address in the Git configuration tool so that this information will be shown for commits that you make (a commit being the act of placing a new version of your code in the repository). We can do this using the git config tool:

 

$ git config --global user.name "Steve Musk"
$ git config --global user.email stevemusk@thesis.com
We can verify that these settings took effect by using the git config tool again, this time passing the key of the setting we want to check:
$ git config user.name
Steve Musk
$ git config user.email
stevemusk@thesis.com

 

Notice that you will run the same configuration commands in both Windows and Unix environments.

 

Initializing Your Repository


To create your first Git repository, you will simply use the git init command. This will initialize an empty Git repository in your source code directory. Once you initialize, you can then perform your first commit to your new repository.

 

For this example, we have an empty directory that we’ll initialize; then we will add a README file, and finally we’ll add and commit the file to our new repository.

 

Remember that Git initializes the repository in the directory from which the git command is called. For instance, if you run it in

C:\Program Files (x86)\Git\bin then the result will be Initialized empty Git repository in C:/Program Files (x86)/Git/bin/.git/

We’ll use the following repository and directory as the blog progresses to track various code examples that we’ll use:

$ git init

Initialized empty Git repository in /Thesis/source/.git/

 

Initial Commit

Now that we have initialized our empty repository, we’ll add a very basic README file (named Paywall in this example) and then perform our initial commit:

$ echo "This is our README." > Paywall

 

Now, if we look at the current status of our repository using git status we’ll see that we now have one untracked file, meaning one file that isn’t yet added to our repository or being tracked for changes by Git. You can use git status any time you’d like to view the status of the working branch of your repository:

 

$ git status
On branch master
Initial commit
Untracked files:
(use "git add <file>..." to include in what will be committed)

 

Paywall

Now, we’ll add the file to our repository using git add:

$ git add Paywall

If we view the git status again, we’ll see our file has been added, but not yet committed (saved) to the repository:

 

$ git status
On branch master
Initial commit
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
new file: Paywall

 

Lastly, we’ll commit our change, and our new README will now be in our repository and be tracked for changes going forward:

$ git commit -m "Initial commit. Added our README"

[master (root-commit) e504d64] Initial commit. Added our README
 1 file changed, 1 insertion(+)
 create mode 100644 Paywall

We can see from the message we received back from git commit that our commit was saved. Now, if we check the git status one more time we’ll see we currently have nothing else to commit:

$ git status

On branch master

nothing to commit, working directory clean
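
The sequence above can be replayed as a single script. This is a sketch assuming a POSIX shell and a local Git install; the /tmp path is illustrative, and the Paywall file name is the one used in this example:

```shell
# Create a scratch directory and initialize an empty repository in it
repo=/tmp/git-first-commit-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init

# Commits need an identity; set one for this repository only
git config user.name "Steve Musk"
git config user.email "stevemusk@thesis.com"

# Create a file, stage it, and make the initial commit
echo "This is our README." > Paywall
git add Paywall
git commit -m "Initial commit. Added our README"

# Nothing left to commit: the working directory is clean
git status --short
```

After the commit, git status --short prints nothing, matching the "nothing to commit" output shown above.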

 

Staging Changes

We have our initial tracked file in our repository and have seen how to add a new file to Git to be tracked. Let’s now change the README and then stage and commit this change.

 

I’ve added a change to the Paywall file, altering the initial text we added to something slightly more informative. Let’s run git status again and see what it shows:

 

$ git status

On branch master

Changes not staged for commit:

(use "git add <file>..." to update what will be committed)

(use "git checkout -- <file>..." to discard changes in working directory)

 

modified: Paywall

It shows that our README was modified, but that it’s not yet staged for commit. We stage it using the git add command. We’ll add it and check the status one more time:

 

$ git add Paywall
$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
modified: Paywall
Lastly, we’ll make this new commit, which will wrap up the change to the file we made:
$ git commit -m "Updated README"
[master ca476b6] Updated README
 1 file changed, 1 insertion(+), 1 deletion(-)

 

 Note: Staging may behave a bit differently in a Windows environment.

 

Viewing History

With all of the changes we’ve just made, it is helpful to be able to go back and view the history of our repository. One of the easiest ways to do this is to use the git log command.

 

When no arguments are passed to it, git log will show you all of the commits in your repository, starting with your most recent changes and descending chronologically from there:

 

$ git log
commit ca476b6c41721cb74181085fd24a40e48ed991ab
Author: Steve Musk <stevemusk@thesis.com>
Date:   Tue Mar 31 12:25:36 2015 -0400

    Updated README

commit dc56de647ea8edb80037a2fc5e522eec32eca626
Author: Steve Musk <stevemusk@thesis.com>
Date:   Tue Mar 31 10:52:23 2015 -0400

    Initial commit. Added our README

 

There are a number of options and arguments you can pass to git log. You can limit the number of results by passing in a number as an argument; you can view the results for just a specific file; and you can even change the output format using the --pretty argument along with a number of different options.

For example, if we wanted to see only the last commit to our Paywall file and summarize the commit onto one line, we could use the following:

$ git log -1 --pretty=oneline -- Paywall
ca476b6c41721cb74181085fd24a40e48ed991ab Updated README

To break down this command: -1 limits the output to one result, --pretty=oneline condenses each commit onto a single line, and -- Paywall restricts the history to our Paywall file.
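
A few other useful git log variations, sketched against a throwaway repository (the /tmp path and commit messages are illustrative):

```shell
# Build a throwaway repository with two commits to demonstrate on
repo=/tmp/git-log-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git config user.name "Steve Musk"
git config user.email "stevemusk@thesis.com"
echo "v1" > Paywall && git add Paywall && git commit -q -m "Initial commit. Added our README"
echo "v2" > Paywall && git commit -q -am "Updated README"

git log -1                            # only the most recent commit
git log --oneline                     # one line per commit: short hash plus subject
git log --pretty=format:"%h %an %s"   # custom format: hash, author name, subject
git log --oneline -- Paywall          # history of a single file
```

The format placeholders (%h, %an, %s, and many others) can be combined freely to build whatever summary view you need.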

 

Note By far, the most frequent commands you'll use will be git add, git commit, git log, git pull, and git push. These commands add files, commit files to the repository, pull changes from a remote origin, or push local changes to a remote origin (such as a hosted repository—more on that later).

 

However, there are a number of additional commands and sub-commands that Git makes available to perform various tasks. For a list of common commands, you can use git help, and git help -a will show all available commands.

 

Ignoring Specific Files


There will often be a number of files and directories within your project that you do not want Git to track. Git provides an easy way of specifying this by way of a Git Ignore file, called .gitignore. You can save these anywhere within your project, but usually, you’ll start with one in your root project directory.

 

Once you create and save this file, you can edit it within your IDE or text editor and add the files and/or paths that you want to ignore. For now, I want to ignore the settings files that my IDE, PHPStorm, creates. PHPStorm creates a directory called .idea, where it stores a number of files specific to my IDE’s settings for this project.

 

We definitely don’t want that in our repository, as it’s not related to the project specifically, and it could cause problems for other developers that clone this code and use PHPStorm. Our initial .gitignore file will now look like this:

# Our main project .gitignore file

.idea/*

 

For now, we have two lines; the first is a comment, which can be added anywhere in the file using the number sign #.

 

The second is the line to tell Git to ignore the .idea folder and anything inside of it, using the asterisk to denote a wildcard match. We will then want to commit this file to our repository so that it’s distributed to anyone else who may clone this repository and contribute back to it.

 

As your project grows and you have new files or directories that you don’t want in your repository, simply continue to add to this file.

 

Other items that are commonly ignored are configuration files that contain passwords or other system-specific information, temporary files such as caches, other media, or resources that are not directly needed by your project or even maintained within your development team.
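
As a sketch, here is a .gitignore covering those categories, along with a way to confirm Git is honoring it (the patterns and the /tmp path are illustrative):

```shell
# A throwaway repository to try the ignore rules in
repo=/tmp/git-ignore-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git config user.name "Steve Musk"
git config user.email "stevemusk@thesis.com"

# IDE settings, cache files, and a local config file are all ignored
cat > .gitignore <<'EOF'
# Our main project .gitignore file
.idea/*
*.cache
config.local.php
EOF
git add .gitignore
git commit -q -m "Added our first .gitignore file"

# Create files matching those patterns; git status stays silent about them
mkdir .idea && touch .idea/workspace.xml page.cache config.local.php
git status --short            # prints nothing: all three are ignored
git check-ignore page.cache   # shows that an ignore rule matches this path
```

git check-ignore is a handy way to debug why a particular file is (or isn’t) being ignored.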

 

Removing Files


At times you will need to remove files from your repository. There are a few different ways to approach removing files depending on the intentions of what you’re doing.

 

If you want to completely remove a file from both the repository and your local working copy, then you can use the git rm command to perform this task. If you delete the file from your local copy using your operating system or within your IDE, then it will show up as a deleted file that needs to be committed.

 

Let’s take a look. First, we’ll create a simple text file to add to our repository, commit it, then delete it:

 

$ touch DELETEME
$ git add DELETEME
$ git commit -m "Adding a file that we plan on deleting"
[master 5464914] Adding a file that we plan on deleting
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 DELETEME
$ git rm DELETEME
rm 'DELETEME'
$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
deleted: DELETEME
$ git commit -m "Removed our temporary file"
[master 6e2722b] Removed our temporary file
 1 file changed, 0 insertions(+), 0 deletions(-)
 delete mode 100644 DELETEME
$ git status
On branch master
nothing to commit, working directory clean

Now, we’ll delete it first on the local system and then remove it from Git, and then we will commit the change:

$ touch DELETEME
$ git add DELETEME
$ git commit -m "Adding another temporary file to delete"
[master b84ad4f] Adding another temporary file to delete
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 DELETEME
$ rm DELETEME
$ git status
On branch master
Changes not staged for commit:
(use "git add/rm <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
deleted: DELETEME
no changes added to commit (use "git add" and/or "git commit -a")
$ git rm DELETEME
rm 'DELETEME'
$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
deleted: DELETEME
$ git commit -m "Removing our second temporary file"
[master e980b99] Removing our second temporary file
 1 file changed, 0 insertions(+), 0 deletions(-)
 delete mode 100644 DELETEME

 

Lastly, you may find that you want to delete a file from Git so it’s no longer tracked, but you want to keep the file locally.

 

Perhaps you accidentally committed a configuration file that has since been added to your .gitignore file; you want to remove it from Git but keep it locally, for obvious reasons. For that you will use the --cached option along with the git rm command:

$ touch DELETEME
$ git add DELETEME
$ git commit -m "Adding a temporary file to delete one more time"
[master f819350] Adding a temporary file to delete one more time
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 DELETEME
$ git rm --cached DELETEME
rm 'DELETEME'
$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
deleted: DELETEME
Untracked files:
(use "git add <file>..." to include in what will be committed)
DELETEME
$ git commit -m "Removed temporary file just in the repository"
[master 26e0445] Removed temporary file just in the repository
 1 file changed, 0 insertions(+), 0 deletions(-)
 delete mode 100644 DELETEME
$ git status
On branch master
Untracked files:
(use "git add <file>..." to include in what will be committed)
DELETEME

 

Branching and Merging


Branching is a mechanism that allows you to separate various segments of changes to your code into sub-repositories of a sort. Merging is the method for bringing this code back together. For example, suppose you have your main-line repository that most development is performed under.

 

Then you have some requirements to build a brand-new set of functionality into your application, but you’ll still be making various unrelated changes and bug fixes to your existing code base.

 

By creating a separate branch just for this new functionality, you can continue to make and track your changes to your main-line code and work on changes for the new functionality separately. Once you’re ready to integrate this change into your main code, you will perform a merge, which will merge your changes into the main-line branch.

 

Note, however, that Git branches are not like Subversion (SVN) branches. SVN branches are typically used only to capture the occasional large-scale development effort, while Git branches are an integral part of your everyday workflow.

 

To get started, let’s create a branch for us to explore this functionality with:

$ git branch branch-example

$ git checkout branch-example
Switched to branch 'branch-example'

 

We created our new branch, called branch-example, with the first command. The second tells Git to switch to that branch so as to start working in and tracking changes there. Switching between branches is done with the git checkout command. Now, we’ll create a test file for this new branch and commit it:

 

$ touch test.php
$ git add test.php
$ git commit -m 'Added a test file to our branch'
If we switch back to our initial branch (master), we’ll see this file isn’t there:
$ git checkout master
Switched to branch 'master'
$ ls
Paywall
$ git log
commit e504d64a544d6a1c09df795c60d883344bb8cca8
Author: Steve Musk <stevemusk@thesis.com>
Date:   Thu Feb 26 10:23:18 2015 -0500

    Initial commit. Added our README

 

Merging

Once we’re ready for our changes in the test branch to appear in the master branch, we’ll need to perform a merge. When performing a merge, Git will compare changes in both branches and will attempt to automatically merge the changes together.

 

In the event of a collision of changes, meaning that the same lines of code were changed in both branches, it will have you manually intervene to resolve the conflict. Once resolved, this will be tracked as another commit, and you can finish your merge.

 

Let’s merge our branch-example changes into the master branch:

$ git merge branch-example
Updating e504d64..a6b7d2d
Fast-forward
 test.php | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 test.php

 

Now that we’ve merged this in, we no longer need branch-example. We can simply delete it using the git branch command again:

$ git branch -d branch-example
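
The whole branch-and-merge round trip can be sketched in one script (the /tmp path is illustrative; the branch and file names are the ones used above):

```shell
# A throwaway repository with one commit on the default branch
repo=/tmp/git-branch-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git config user.name "Steve Musk"
git config user.email "stevemusk@thesis.com"
echo "This is our README." > Paywall
git add Paywall && git commit -q -m "Initial commit. Added our README"

# Remember the starting branch name (master or main, depending on Git version)
main=$(git symbolic-ref --short HEAD)

# Create the branch, switch to it, and commit a change there
git branch branch-example
git checkout -q branch-example
touch test.php
git add test.php && git commit -q -m "Added a test file to our branch"

# Merge the branch back and delete it
git checkout -q "$main"
git merge branch-example
git branch -d branch-example
```

Because no other commits landed on the main branch in the meantime, the merge is a simple fast-forward, just like the output shown above.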

 

Stashing Changes

There will be many times when working with your project that you might need to pull changes from a remote repository before you’re ready to commit what you’re working on, or you might need to switch to another branch to do some other work before you’re ready to commit and don’t want to lose your changes. This is where the git stash command comes in handy.

 

To stash your changes, you simply invoke the git stash command. You can view the stashes you’ve saved by passing in the list sub-command, and you can reapply the changes by using the apply sub-command. Let’s see it in action:

 

$ git status
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: test.php
$ git stash
Saved working directory and index state WIP on master: 08e9d29 adding a test file
HEAD is now at 08e9d29 adding a test file
$ git status
On branch master
nothing to commit, working directory clean

You can see we had changes to test.php that weren’t yet committed; after calling git stash, we now have a clean working directory:

 

$ git stash list
stash@{0}: WIP on master: 08e9d29 adding a test file
$ git stash apply
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: test.php
no changes added to commit (use "git add" and/or "git commit -a")
$ git status
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: test.php

 

We can see the stash we saved using git stash list, and after calling git stash apply the change is back in our working directory. By default, git stash apply applies the most recent stash in the list.

 

If you want to apply a specific stash, then you must supply the stash number that you see when calling git stash list. Using the preceding list output as an example, we would use the following:

$ git stash apply stash@{0}
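
A complete stash round trip, sketched in a throwaway repository (the /tmp path and file contents are illustrative):

```shell
# A throwaway repository with one tracked file
repo=/tmp/git-stash-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git config user.name "Steve Musk"
git config user.email "stevemusk@thesis.com"
echo "<?php" > test.php
git add test.php && git commit -q -m "adding a test file"

# Make an uncommitted change, then park it in the stash
echo "phpinfo();" >> test.php
git stash
git status --short       # clean: the change now lives in the stash

# Inspect the stash and bring the change back
git stash list
git stash apply stash@{0}
git status --short       # test.php shows as modified again
```

Note that apply leaves the entry in the stash list; git stash pop applies it and removes it in one step, and git stash drop discards an entry without applying it.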

 


 

Tagging

Tagging within Git allows you to tag any given commit with a label for future reference. For example, you can use it to tag specific releases of your code or other important landmarks along the way during development.

 

Git provides two different tag types. There are lightweight tags, which are just labels pointing to a commit. Annotated tags, by contrast, are full checksummed objects that record the name and email of the person tagging and can include a message.

 

It’s highly recommended that you always use annotated tags unless you need to temporarily tag something, in which case a lightweight tag will do.

 

Lightweight Tags

Let’s create a simple lightweight tag to demonstrate, then delete it and create an annotated tag.

Create initial lightweight tag:

 

$ git tag v0.0.1

Now show details of the tag:

$ git show v0.0.1
commit a6b7d2dcc5b4a5a407620e6273f9bf6848d18d3d
Author: Steve Musk <stevemusk@thesis.com>
Date:   Thu Feb 26 10:44:11 2015 -0500

    Added a test file to our branch

diff --git a/test.php b/test.php
new file mode 100644
index 0000000..e69de29

We can delete a tag using the -d option:

$ git tag -d v0.0.1
Deleted tag 'v0.0.1' (was a6b7d2d)

Annotated Tags

Now create the annotated version:

$ git tag -a v0.0.1 -m "Initial Release"

Show the details of the annotated tag:

$ git show v0.0.1
tag v0.0.1
Tagger: Steve Musk <stevemusk@thesis.com>
Date:   Sun Mar 15 18:54:46 2015 -0400

    Initial Release

commit a6b7d2dcc5b4a5a407620e6273f9bf6848d18d3d
Author: Steve Musk <stevemusk@thesis.com>
Date:   Thu Feb 26 10:44:11 2015 -0500

    Added a test file to our branch

diff --git a/test.php b/test.php
new file mode 100644
index 0000000..e69de29

 

As you can see, on the annotated version we have the date, name, and email of the person who created the tag.
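
Both tag types can be compared side by side in a quick sketch (the /tmp path and version numbers are illustrative):

```shell
# A throwaway repository with one commit to tag
repo=/tmp/git-tag-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git config user.name "Steve Musk"
git config user.email "stevemusk@thesis.com"
echo "This is our README." > Paywall
git add Paywall && git commit -q -m "Initial commit. Added our README"

git tag v0.0.1-temp                     # lightweight: just a label on a commit
git tag -a v0.0.1 -m "Initial Release"  # annotated: records tagger, date, message

git tag -l                              # list both tags
git tag -d v0.0.1-temp                  # drop the temporary lightweight tag
```

Internally, the annotated tag is its own object (git cat-file -t reports "tag"), while the lightweight tag is only a reference to the commit itself.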

 

Undoing Changes


There will come times when you might accidentally commit something that you want to undo, or where you might want to reset your local working copy back to what it was from the last commit or a given commit within the repository history. Undoing changes in Git can be broken down in the following ways:

  • Amend
  • Un-stage
  • File Reset
  • Soft Reset
  • Mixed Reset
  • Hard Reset

 

Amend

Undoing a previous commit, whether to change the commit message or to add additional files, can be done using the --amend option with git commit. For example, suppose you have two files to commit, and you accidentally commit only one of them.

 

You can add the other file to the same commit and even change the commit message using the --amend option, like this:

$ git add second.php

$ git commit -m "Updated commit message" --amend
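
Here is that amend flow end to end, as a sketch in a throwaway repository (the /tmp path and the first.php/second.php file names are illustrative):

```shell
# A throwaway repository to demonstrate amending in
repo=/tmp/git-amend-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git config user.name "Steve Musk"
git config user.email "stevemusk@thesis.com"

# Commit the first file and accidentally leave out the second
touch first.php second.php
git add first.php
git commit -q -m "Added first.php"

# Fold second.php into that same commit and reword the message
git add second.php
git commit -q --amend -m "Added first.php and second.php"

git log --oneline    # still a single commit, now containing both files
```

Because amending rewrites the commit, avoid doing it on commits you have already pushed to a shared remote.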

 

Un-stage

This action would un-stage a file that has been staged but not yet committed. Un-staging a file makes use of the git reset command. For example, suppose you accidentally staged two files to commit but you only wanted to stage and commit one of them for now. You would use the git reset command along with the filename to un-stage it, like this:

 

$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
new file: first.php
new file: second.php
$ git reset HEAD second.php
$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
new file: first.php
Untracked files:
(use "git add <file>..." to include in what will be committed)
second.php

 

File Reset

A file reset would mean reverting your working changes to a file back to the most recent commit or to an earlier commit that you specify. To reset a file to the most recent commit, you will use git checkout:

 

$ git status
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: first.php
$ git checkout -- first.php
$ git status
On branch master
nothing to commit, working directory clean
If you wanted to reset a file back to a specific commit, you would use git reset along with the commit hash and the filename, like this:
$ git reset a659a55 first.php

 

Soft Reset

A soft reset resets your repository index back to the most recent commit or a specified commit and leaves your working changes untouched and your staged files staged.

 

It is invoked using the --soft option with git reset. You can either specify a commit hash to reset to or omit the commit hash, and it will reset to the most recent commit:

$ git reset --soft 26e0445

 

Mixed Reset

Much like a soft reset, a mixed reset resets your repository index back to the most recent commit or a specified commit, leaving your working changes untouched. However, it removes any files from staging. This is the default action of just using git reset or git reset along with a commit hash, such as:

$ git reset 26e0445

 

Hard Reset

Lastly, the hard reset is the most dangerous of all options, so use this only when absolutely necessary. It is a hard reset of your repository back to a given commit while discarding all working and staged changes. This action cannot be undone, so make sure you know that you want to do this before doing it:

$ git reset --hard e504337

HEAD is now at e504337 Added our first .gitignore file

 

Version Control in the Cloud: Bitbucket and GitHub


Having a remote-hosted repository is common practice when using Git other than for just your own personal projects. While there are many different ways of accomplishing this, there are two very popular services that allow you to have hosted repositories in the cloud.

 

Enter Bitbucket and GitHub.

 

Each of these services offers both free and paid plans. The largest difference between their free plans is that Bitbucket allows an unlimited number of private repositories (limited to five collaborators), while GitHub’s free plan provides only public repositories.

 

Once you have an account with one of these services and create your repository, you will define this remote repository in your local Git configuration to allow you to push and pull changes to and from it.

 

Bitbucket

Let’s get started with Bitbucket. When you first visit their site, you will create a new account. Once your account is created, you will log in and be provided with the option to set up a new repository.

 

Before we proceed with creating our repository, there’s a single step we can do that will make interacting with this repository a lot easier for us, which is adding an SSH key to use for authentication.

One of the most important advantages of Bitbucket is that it integrates with JIRA and also supports Mercurial.

 

SSH Key

You can add an SSH key to your Bitbucket account, which will allow you to interact with it from within Git on your local machine without having to type your Bitbucket password over and over again. To do this, navigate to the Manage Account section and then locate “SSH Keys.”

 

From here, you can add an SSH key from your local machine that will be used as authorization when working with any remote repository under your account. If you haven’t ever set up an SSH key, it’s easily done with Mac OS X and Linux, and by using Git Bash on Windows.

 

From within Mac OS X or Linux, open a terminal, or on Windows open a Git Bash prompt, then issue the following command and answer the few questions it presents you:

$ ssh-keygen -t rsa -C "stevemusk@thesis.com"

 

It is highly recommended that you accept the default values it provides, including the path to store the new key pair. Once these steps are finished, you will have both a public and private key pair created.

 

Locate the public key (using the path shown from the ssh-keygen process) and open it using your favorite text editor. You will then copy the contents of the public key to your Bitbucket account and save it. This will complete the key setup for your account.

 

Creating Your First Remote Repository

With the SSH key in place in the Bitbucket account, you can now create your repository. To start, click the Create button located in the top navigation bar, which takes you to the Create form. Enter the information about your project and then click Create Repository. This will take you to the repository configuration page.

 

This will provide you with a few options on what to do with this new repository. If you haven’t yet created your local repository, as we did earlier, then you could use the “Create from scratch” option.

 

However, in our case, we want to push our current repository to this new remote Git repository. Following the instructions provided on this screen, let’s link our repository and do the first push to it to copy our current code to it:

$ git remote add origin git@bitbucket.org:thesis/pro-php-mysql.git
$ git push -u origin --all
Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (6/6), 524 bytes | 0 bytes/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To git@bitbucket.org:thesis/pro-php-mysql.git
 * [new branch] master -> master
Branch master set up to track remote branch master from origin.

Now that we’ve successfully pushed, we can click the “Source” icon on Bitbucket to see our code visible there.
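
If you want to practice this flow without a hosted account, any bare repository can stand in for the remote. A sketch using a local path in place of the Bitbucket URL (the /tmp paths are illustrative):

```shell
# A bare repository stands in for the hosted remote
remote=/tmp/git-push-remote.git
rm -rf "$remote"
git init -q --bare "$remote"

# A local repository with one commit, linked to that "remote"
work=/tmp/git-push-demo
rm -rf "$work" && mkdir -p "$work" && cd "$work"
git init -q
git config user.name "Steve Musk"
git config user.email "stevemusk@thesis.com"
echo "This is our README." > Paywall
git add Paywall && git commit -q -m "Initial commit. Added our README"

# Exactly the same commands as with Bitbucket, just a local path as the URL
git remote add origin "$remote"
git push -q -u origin --all
git ls-remote origin      # the branch now exists on the remote
```

Swapping the local path for an SSH URL like git@bitbucket.org:thesis/pro-php-mysql.git is the only change needed to target a hosted service.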

 

GitHub


Instead of Bitbucket, you might want to use GitHub. Comparing the two, they have very different billing structures and also differ in their history viewers and collaboration features.

 

The steps using Github are very similar. You first find and click the New Repository button, which will then present you with a Repository Create form similar to what we had with Bitbucket.

 

From here, you’ll fill out the form and create the repository. On the next screen, you’ll be presented with instructions based on whether this is a new or existing repository, just as with Bitbucket. We can add this repository just like we did with Bitbucket:

$ git remote add origin git@github.com:thesis/pro-php-mysql.git
$ git push -u origin --all

 

Pushing, Pulling, and Conflict Resolution

As we just did when initially pushing our code to our new remote repository, we’ll continue to use the git push command to push code to it, as well as the git pull command to pull changes from the repository that others may have committed since our last pull.

 

For example, suppose you invite another developer to collaborate with you on a project. You pull the latest from the remote repository, then do some work.

 

Developer 2 does the same but commits and pushes his changes to the remote repository before you do. When you go to push, you’ll receive a message back from Git telling you that your repository version is behind.

 

You’ll then use the git pull command to fetch any changes and attempt to automatically merge and rebase the changes with yours. Then you can push your changes to the repository. You’ll both continue this pattern while both working with the repository.

 

In the event that both of you work on the same file and have overlapping changes, Git won’t be able to determine which version is the correct one to use. This is called a conflict. You must manually resolve conflicts, then commit the resolved changes and push back to the remote repository.
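
You can reproduce and resolve a conflict locally with two branches that edit the same line. A sketch (the /tmp path and file contents are illustrative):

```shell
# A throwaway repository with one committed file
repo=/tmp/git-conflict-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git config user.name "Steve Musk"
git config user.email "stevemusk@thesis.com"
echo "version 1" > Paywall
git add Paywall && git commit -q -m "Initial commit"
main=$(git symbolic-ref --short HEAD)

# Two branches change the same line in different ways
git checkout -q -b feature
echo "feature version" > Paywall
git commit -q -am "Feature change"
git checkout -q "$main"
echo "main version" > Paywall
git commit -q -am "Main change"

# The merge stops and writes conflict markers into the file
git merge feature || true
grep "<<<<<<<" Paywall

# Resolve by choosing the final contents, then commit the resolution
echo "merged version" > Paywall
git add Paywall
git commit -q -m "Merged feature, resolved conflict in Paywall"
```

The <<<<<<<, =======, and >>>>>>> markers delimit the two competing versions; editing the file to its intended final state, adding it, and committing completes the merge.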

 

Git Tools

There are a number of tools that exist to make working with Git easier. Many IDEs offer integration with Git, and both Bitbucket and Github provide their own GUI tools that make working with Git much easier.

 

PHPStorm

If you’re not familiar with PHPStorm, it is a popular, cross-platform (Linux, Mac OS X, and Windows) PHP IDE developed by JetBrains. It is the IDE that I use throughout the various examples; however, you do not have to have PHPStorm to do any of the exercises in this blog.

You can download PhpStorm from the JetBrains website (jetbrains.com/phpstorm).

If you have a Git repository in your project root folder in PHPStorm, it will automatically detect it and provide menu entries to perform a number of different actions.

 

Here we see there are a number of actions available to us when viewing these options while on our test.php file. From here we can commit or add the file if it hasn’t been added yet. We can also do a diff to compare our local file with the same version in the repository or with what the latest version might be on a remote repository. 

 

We can also compare it with another branch within our repository or see the history of all of the changes for this one file. Another function is the “Revert” function, which allows you to quickly revert any uncommitted changes in your file back to the state it was in for the last local commit. 

 

From this entry, we can view branches, tag, merge, stash or un-stash our current changes, or fetch, pull, or push changes from and to a remote repository.

 

SourceTree

The SourceTree application is a free Git client for Windows and Mac users that is built by Atlassian, the same company that runs Bitbucket. It is a very powerful Git GUI client that makes working and interacting with Git repositories, both locally and remote, quite easy.

 

Installing SourceTree

SourceTree can be downloaded by visiting Sourcetree | Free Git GUI for Mac and Windows. Once downloaded, run the installer and follow the instructions to install it on your development machine. The first time you run SourceTree, it will prompt you to log in with an existing Bitbucket or GitHub account.

 

You can either log in or skip this step. If you do choose to log in, you will be able to see your remote repositories within SourceTree’s main bookmark/browser window. You can always choose to add one of these linked accounts later.

 

Adding a Repository

To add a Git repository to SourceTree, you will click on the New Repository button and choose whether you are cloning an existing remote repository, adding an existing local repository, or creating a new local or remote repository.

 

Add the new repository that you created earlier in this blog by selecting “Add existing local repository.” Navigate to the directory where the repository was initialized, then click the Add button.

 

This repository will now be visible in the SourceTree bookmark/browser window. Simply double-click the name of the repository to bring up the full GUI.

 

From here, you can continue to work with Git for all of the actions we have discussed so far and others, including committing, branching, merging, pushing and pulling to and from a remote repository, and more.

 

GitHub GUI

GitHub has its own Git GUI that’s freely available as well. It’s very clean and intuitive to use, but lacks some of the advanced features you’ll find in SourceTree. However, if you’re looking for a nice, clean interface to use with Git, it’s definitely worth a look.

 

Installing the GitHub GUI

Like SourceTree, the GitHub GUI is available for both Windows and Mac users, and both versions can be downloaded by visiting Simple collaboration from your desktop. Once downloaded and installed, GitHub will walk you through the setup process to finish the installation.

 

Adding a Repository

One interesting feature of the GitHub GUI is that it can find Git repositories on your system and offer them to you during setup, with the option of importing them so you can start working with them in the GitHub GUI.

 

If you choose not to do that, you can also add or create a new repository later using the menu entry. Once your repository has been added, you can begin working with it in the GitHub GUI.

 

gitg

gitg is an open source Git GUI made by the GNOME Foundation, and it is for Linux users only.

 

Installing gitg

gitg is available for installation using both yum and apt-get. gitg doesn’t quite provide the power and usability that is found with SourceTree, GitHub, or even within the PHPStorm IDE. It does, however, provide a nice interface for browsing or searching through a repository’s history on Linux systems.

 

Adding a Repository

To add a repository using gitg, you click the “cog wheel” icon, which then reveals a sub-menu for opening a local repository or cloning a remote repository. Once added, you can click to open the repository view.

 

Virtualizing Development Environments

Creating virtualized development environments allows you to form encapsulated environments for specific projects that can have the same exact versions of operating systems, PHP, web servers, databases, libraries, settings, etc. as the real thing.

 

These environments keep everything isolated from each other and can easily be destroyed and recreated as needed. This provides a number of benefits:

 

  • Ability to run multiple projects against different PHP versions, matching each project’s production version, without juggling installs on your development machine.
  • No chance of breaking your development machine’s configuration when installing a library, changing a setting, etc.
  • Ability to take a snapshot of your environment that you can easily revert back to.

 

In this blog, as we look into virtualizing your development environments, we will be focusing solely on using Vagrant, which is a tool for building complete, distributable development environments.

 

Traditionally, there are two approaches to setting up the development environment:

  • Client and server processes run on the same machine.
  • Client and the server run on different machines, which imitates the way the deployed application is executed by end users.

 

We’ll look at the benefits of using virtualized environments, how to get and set up Vagrant, and how to provision your very first environment. By the end of the blog, you should be able to easily get up and running with your own virtual machine after running just one simple command: vagrant up.

 

Introduction to Vagrant

There’s a good chance you have heard of or maybe even looked at using Vagrant before. As previously mentioned, Vagrant is a tool for building complete, reproducible, and distributable development environments.

Note Vagrant is open source software distributed under MIT license.

 

It does this by following a consistent configuration pattern, allowing you to define sets of instructions to configure your virtual environment using Vagrant’s specific language.

 

These instructions are stored in a file called Vagrantfile, and since it is only text, it can easily be added to your project’s source control repository, thus allowing the versioning of these environment configurations as well as allowing it to be easily distributed among other developers.

 

At the heart of it all, we can break a full Vagrant setup down into four pieces:

 

Provider – This is the virtual platform that your Vagrant setup will run on. Since Vagrant doesn’t provide any actual virtualization, it relies on providers to provide this functionality for you. By default, Vagrant supports VirtualBox.

 

However, there are a number of other providers you can use, such as Docker, VMWare, Parallels, Hyper-V, and even cloud providers such as AWS and DigitalOcean.

 

Boxes – Boxes are the virtual machine images used to build your Vagrant setup. They can be used by anyone and on any Vagrant provider. There are a growing number of public Vagrant boxes available for your use, some that are base OS installations and some with a pre-configured LAMP stack or other configurations and languages.

 

In addition to the ones publicly available, you can also create your own Vagrant boxes that can be either shared publicly or used privately only by you and/or your team.

 

Vagrantfile – Configuration file for Vagrant.

Provisioners – Provisioners are used in Vagrant to automatically install software, alter configurations, and perform other operations when running the vagrant up process.

 

There are a number of provisioners supported by Vagrant, but for the sake of this blog, we’ll be looking at Bash, Puppet, and Ansible.

 

Installing Vagrant and VirtualBox

Before you can do anything with Vagrant, you must first have a virtual machine provider installed, such as the free and open source VirtualBox that we’ll use throughout these examples. You of course also need Vagrant itself installed.

 

VirtualBox can be downloaded from its website, located at Oracle VM VirtualBox. Simply download the appropriate installer package for your operating system and follow the instructions on the installer to install.

 

Like VirtualBox, Vagrant can be downloaded from its website at Vagrant by HashiCorp. Download the appropriate installer package for your operating system and follow the instructions on the installer to install. Once installed, the vagrant command will be available to you in your terminal.

 

In this blog, we will use Vagrant for a Linux environment.

 

Vagrant Commands

All commands issued to Vagrant are done using the vagrant command that’s now available to you in your terminal. Let’s take a look at the list of command options we have:

$ vagrant -h

Usage: vagrant [options] <command> [<args>]

-v, --version Print the version and exit.

-h, --help Print this help.

 

Additional subcommands are available, but are either more advanced or are not commonly used. To see all subcommands, run the command `vagrant list-commands`.

 

As you can see from the last bit of information outputted from this command, there are additional subcommands that are available for us to use. For the sake of this blog, we’ll be focusing on the most commonly used commands and subcommands.

 

Setting Up Our First Environment

With VirtualBox and Vagrant installed, getting up and running with your first Vagrant environment is a relatively short and easy process. At a minimum, all you need is a basic Vagrantfile and a selected Vagrant box to use.

 

For starters, we’re going to set up a minimal install using a base Ubuntu 14.04 box. Perusing the community box repository catalog from HashiCorp (the company behind Vagrant), located at https://atlas.hashicorp.com/boxes/search, I see the box we want to use is ubuntu/trusty64. Using two commands, we’ll initialize our Vagrant setup, download and install the box, then boot our new virtual machine (VM).

 

Optionally, you can override Vagrant’s home directory (where boxes are stored; it defaults to ~/.vagrant.d) by setting the VAGRANT_HOME environment variable. This can easily be done by executing the following command in bash:

$ export VAGRANT_HOME=/some/shared/directory

 

Let’s create a new folder just for this Vagrant instance that we’re setting up, then we’ll initialize the Vagrant setup:

 

$ mkdir VagrantExample1

$ cd VagrantExample1

$ vagrant init ubuntu/trusty64

 

You should see a message returned that tells you a Vagrantfile has been placed in your directory and you’re ready to run vagrant up. Before we do that, let’s take a look at the initial Vagrantfile that was generated:

 

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backward compatibility). Please don’t change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# Documentation - Vagrant by HashiCorp.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://atlas.hashicorp.com/search.
config.vm.box = "ubuntu/trusty64"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a forwarded port mapping, which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# config.vm.network "forwarded_port", guest: 80, host: 8080
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matches to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
# config.vm.provider "virtualbox" do |vb|
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
# end
#
# View the documentation for the provider you are using for more
# information on available options.
# Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
# such as FTP and Heroku are also available. See the documentation at
# https://docs.vagrantup.com/v2/push/atlas.html for more information.
# config.push.define "atlas" do |push|
# push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
# end
# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
# config.vm.provision "shell", inline: <<-SHELL
# sudo apt-get install apache2
# SHELL
end

 

As you can see, most of the options here are commented out. The only configuration line that isn’t is:

config.vm.box = "ubuntu/trusty64"

This line tells Vagrant to use the box that we specified with our vagrant init command.

 

Initial VM setup

We’re now ready to issue our next and final command, vagrant up. This will boot our VM for the first time and do any initial setup (provisioning) that we’ve told it to do.

 

For now, this is just a basic system, so it will download the box we chose for the first time and import it, then just set up the initial SSH keys and make the machine available to us. See here:

 

$ vagrant up --provider virtualbox

You will see quite a bit of output from Vagrant as it downloads and brings up this initial box. The last few lines let you know it was a success and is ready for your use:

 

==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...

==> default: Mounting shared folders...

default: /vagrant => /Thesis/VagrantExample1

 

We now have a new VM running Ubuntu 14.04. We can connect to this VM via SSH, just like any other Linux machine. With Vagrant, we do this by issuing the vagrant ssh command:

$ vagrant ssh

 

Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-45-generic x86_64)

vagrant@vagrant-ubuntu-trusty-64:~$

 

The vagrant user is the default user set up with each box. This user has full sudo privileges without needing any additional passwords.

Note Remember to run the command vagrant --help to get the entire list of commands you can use with Vagrant.

 

Shared Folders

By default, Vagrant will share your project’s folder with the /vagrant directory inside of your VM. This allows you to easily edit files located directly in your project on your development machine and see those changes immediately reflected in the VM.

 

A typical use for this would be to set up Apache on your Vagrant box and point the site root folder to somewhere within the /vagrant directory. You can also specify additional shared directories using the config.vm.synced_folder configuration parameter in the default Vagrantfile.
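As a sketch of what that looks like (the host path and guest mount point below are invented for the example), an extra synced folder entry in the Vagrantfile would be:

```ruby
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"

  # Share the host's ./src directory (relative to this Vagrantfile)
  # into the guest at /var/www/site, e.g. as an Apache docroot.
  config.vm.synced_folder "./src", "/var/www/site"
end
```

Edits you make to files under ./src on the host then appear immediately at /var/www/site inside the VM, with no copying step.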

 

Networking

Vagrant provides multiple options for configuring your VM’s networking setup. All network options are controlled using the config.vm.network method call.

 

The most basic usage would be to use a forwarded port, mapping an internal port such as port 80 for regular HTTP traffic to a port on your host machine. For example, the following configuration line will make regular web traffic of your VM accessible at http://localhost:8080:

 

config.vm.network "forwarded_port", guest: 80, host: 8080

If you would prefer to assign a private IP address, through which you can instead access the entire VM from your local machine, you can use the config.vm.network "private_network" method call:

config.vm.network "private_network", ip: "192.168.56.102"

 

Removing VMs

Now, just as easily as we set up this VM, let’s destroy it and all traces of it with another simple command, vagrant destroy:

$ vagrant destroy

default: Are you sure you want to destroy the 'default' VM? [y/N] y

==> default: Forcing shutdown of VM...

==> default: Destroying VM and associated drives...

 

Just like that, our Vagrant VM is gone. However, our Vagrantfile is still intact, and the VM can be brought right back again by simply issuing another vagrant up.

 

Default Vagrant LAMP box

Our previous example is just a basic, bare Linux machine without Apache, MySQL or PHP installed. This isn’t very helpful if you’re setting this box up for PHP development unless you want to roll your own custom configurations.

 

Luckily, there are a number of community-provided Vagrant boxes that are already pre-configured with Apache, MySQL, and PHP, as well as some that already have popular PHP frameworks and platforms installed, such as Laravel, Drupal, and others.

 

Using the aforementioned Atlas community repository catalog or the Vagrantbox.es catalog (http://vagrantbox.es), you can search for and find a box that will work for you without any further configuration changes needed.

Advanced Configurations Using Ansible, Bash, and Puppet

 

As you can see from our initial example, it’s extremely easy to get a VM up and running with Vagrant. However, just a basic VM isn’t going to be of much use to us when setting it up as a full development environment that is supposed to mirror our production setup.

 

If you don’t find a Vagrant box that already has LAMP configured, then having to install and configure Apache, MySQL, and PHP manually each time you set up a new VM makes Vagrant a lot less useful.

 

It’s also common that even if LAMP is already set up, there will be a number of configuration operations that need to be run after the initial setup, such as pointing Apache to a different public folder for your framework or setting up a database for your application.

 

This is where advanced configurations using one of the Vagrant-supported provisioners come in handy.

 

Vagrant supports a number of provisioners. For the sake of this blog, we are going to look at Ansible, Bash, and Puppet. If you’re only familiar with Bash, then it’s the easiest to jump in and start using.

 

However, there are many preconfigured packages (modules) available for Ansible, Chef, and Puppet that will drastically cut down on the time it would take you to do these tasks in Bash using basic commands.

 

Bash (Shell) Provisioner

Let’s start with a simple example by installing Apache, MySQL, and PHP using a simple bash script. This entire Vagrant setup consists of two files: the Vagrantfile and our bash script.

 

We’re going to call this script provision.sh. This script will install the Ubuntu repo versions of Apache, MySQL, and PHP using apt-get.

We use the following line in our Vagrantfile to tell Vagrant to use Bash as a provisioner and then to use the provision.sh script:

config.vm.provision :shell, :path => "provision.sh"

The contents of our provision.sh script are as follows:

 

#!/bin/sh
set -e # Exit script immediately on first error.
set -x # Print commands and their arguments as they are executed.
export DEBIAN_FRONTEND=noninteractive
# Do an update first
sudo apt-get -y update
# Install Apache
sudo apt-get -y install apache2
# Install PHP
sudo apt-get -y install php5 php5-mysql php5-cli php5-common
# Install MySQL
echo mysql-server mysql-server/root_password password 123 | sudo debconf-set-selections
echo mysql-server mysql-server/root_password_again password 123 | sudo debconf-set-selections
sudo apt-get -y install mysql-server-5.6
# Restart Apache & MySQL to ensure they're running
sudo service apache2 restart
sudo service mysql restart

As you can see, with this script we’re just running the same commands we would run if we were manually setting up our VM; however, we automate the process since Vagrant can run the Bash commands for us.

 

Puppet Provisioner

Puppet is a configuration management system that allows us to create very specialized Vagrant configurations. Puppet can be used to form many different types of Vagrant configurations via the inclusion of specific Puppet modules inside of your project. These modules can be obtained from the Puppet Forge site at https://forge.puppetlabs.com/.

 

Each one of the modules you use will have anywhere from a few to many different configuration options for tailoring the environment to your exact needs. You should reference the README for each of these modules as you start customizing to find out what options are available to you.

 

For this example, download the Apache, MySQL, and PHP modules from Puppet Forge and organize them according to their recommended hierarchy as noted on the website.

 

You should also download a few required dependencies from Puppet Forge. We’ll use these to set up a VM with Apache, MySQL, and PHP, just like with our Bash example.

 

Note A manifest is the set of instructions that tells Puppet what to do with all of the modules.

We’ll place the Puppet manifest in the default location in which Vagrant will look for it, under manifests/default.pp. First, update the Vagrantfile to tell Vagrant that we’re now using Puppet as a provisioner:

config.vm.provision "puppet" do |puppet|
  puppet.manifests_path = "manifests"
  puppet.manifest_file = "default.pp"
  puppet.module_path = "modules"
end

The default.pp file located under the main manifests directory is the file that tells Puppet what to install and configure for our VM. This is where you would define the various configuration options you need for your setup. For the sake of this example, I’ve kept the configurations simple and concise:

 

# Update apt-get
exec { 'apt-get update':
  command => 'apt-get update',
  path    => '/usr/bin/',
  timeout => 60,
  tries   => 3
}
class { 'apt':
  always_apt_update => true
}
# Install puppet in our VM
package { ['puppet']:
  ensure  => 'installed',
  require => Exec['apt-get update'],
}
# Install Apache, set webroot path
class { 'apache':
  docroot    => '/var/www/html',
  mpm_module => 'prefork'
}
# Install MySQL, setting a default root password
class { '::mysql::server':
  root_password           => '123',
  remove_default_accounts => true
}
# Install PHP and two PHP modules
class { 'php':
  service => 'apache2'
}
php::module { 'cli': }
php::module { 'mysql': }

# Install and configure mod_php for our Apache install
include apache::mod::php

 

As you can see, there is a bit more going on here than in our Bash script; however, the power and flexibility of being able to make configuration changes and specific installation setups just by adding a few additional configuration parameters makes Puppet a great choice for complex setups.

 

Ansible Provisioner

Ansible is an automation tool that can be used for many types of autonomous tasks and is not limited to use with Vagrant. With Vagrant, we can use it along with a playbook to automate the setup and configuration of our Vagrant machines. An Ansible playbook is simply a YAML file that instructs Ansible on what actions to perform.

 

Note You may want to consider running Ansible against the machine you are configuring because it can be quicker than using a combination of setup scripts.

 

Using Ansible is much more lightweight than using Puppet, as there is no need to download or include various modules to perform the tasks you need, and the guest VM doesn’t need anything special installed.

 

The only requirement is that the host machine running Vagrant has Ansible installed. Installation instructions for a variety of operating systems can be found in the Ansible documentation at http://docs.ansible.com/intro_installation.html#getting-ansible.

 

For this example, we’ll configure a very simple Ansible playbook to set up Apache, MySQL, and PHP on our Vagrant machine, just like in our Bash and Puppet examples. First, we must instruct Vagrant to use Ansible as the provisioner and supply the name of our playbook file:

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
end

 

Then we instruct Ansible to install Apache, MySQL, and PHP:

---
- hosts: all
  sudo: true
  tasks:
    - name: Update apt cache
      apt: update_cache=yes
    - name: Install Apache
      apt: name=apache2 state=present
    - name: Install MySQL
      apt: name=mysql-server state=present
    - name: Install PHP
      apt: name=php5 state=present

 

Even though this configuration seems very simple, don’t let it fool you; Ansible is very powerful and can perform complex configurations. We can easily make configuration customizations, just as we can with Puppet, by making use of Ansible templates, variables, includes, and much more to organize and configure a more complex setup.

 

Advanced Configuration Conclusion

As you can see, utilizing provisioners to automate the tasks of completely building your environment makes setting up your development environments much easier than having to manually do it over and over again.

 

Each provisioner has a different approach for how it accomplishes these tasks, giving you a range of choices and flexibility for your project or environment.

 

Configuration Tools

Now that we have a better understanding of some of the core configuration settings and provisioners available to Vagrant, let’s take a look at two configuration tools aimed at making the setup of these environments even easier.

 

Note Both of these tools are under current development, so they’re both constantly changing and progressing over time. It’s been my experience with them that they’re great for getting you up and running quickly, but they do have their periodic issues and weaknesses.

 

PuPHPet

This tool, pronounced “puffet,” uses Puppet as the provisioning language and provides an easy-to-follow GUI for configuring your environment.

 

Accessing PuPHPet

You can access this tool by visiting https://puphpet.com. PuPHPet is publicly hosted on GitHub and is open source, so anyone can fork it and contribute. This tool works by generating a manifest YAML file along with the respective Puppet modules needed to build and configure your new VM environment.

 

You can use the configurations it generates directly as is, or you can make modifications and tweaks as needed.

 

Setting Up and Using PuPHPet Configurations

Once you walk through each of the setup options on PuPHPet, you will download your custom configuration. This download consists of the Vagrantfile and a puphpet directory that contains all of the necessary Puppet manifests and modules needed for your environment.

 

Simply copy these two items to your project directory and you’re ready to run vagrant up to set up and provision this environment.

 

Tip One nice feature of the configuration setup generated by PuPHPet is the file structure under the files directory. It contains four subdirectories, which let you create scripts that execute exactly once, on every provision, at every startup, and so on.

 

For example, you could use the execute-once scripts to perform post-setup cleanup, run custom commands needed to provision PHP application-specific dependencies (like composer install), set up database data, etc.

 

Phansible

This is a newer tool that uses Ansible instead of Puppet as the provisioning language. It’s similar to PuPHPet, but as of right now it does not have all of the bells and whistles that are available in PuPHPet. It is also publicly hosted on GitHub and is open source.

 

Just as with PuPHPet, once you walk through each of the setup options on Phansible, you will download your custom configuration. This download also consists of the Vagrantfile and an ansible directory that contains the playbook.yml file.

 

It also holds several other items that can be used along with Ansible that we didn’t utilize in our basic Ansible example earlier (such as the templates that were mentioned).

 

Phansible can be found at http://phansible.com/

 

Vagrant Plugins

As you begin using Vagrant more and more, you will periodically need additional functionality that isn’t provided to you out of the box from Vagrant. Fortunately, Vagrant has a plugin system, and many times a plugin exists to do exactly what you need.

 

Vagrant plugins are very easily installed using the vagrant plugin install plugin-name-here subcommand. Here are a few helpful plugins that you may find useful as you begin to use Vagrant as your development environment choice:

 

Vagrant Host Manager – This plugin manages the hosts file on the host machine, allowing you to easily specify a temporary hosts entry that maps to your VM’s IP address. This lets you access your development environments using something similar to the production address.

 

So if you have www.someproduct.com, you could set up something like dev.someproduct.com or www.someproduct.dev and use Vagrant Host Manager to automatically add this to your hosts file. It will add and remove the entry for you during the vagrant up and halt commands.

 

This plugin is very useful when combined with specifying your own private network IP address for your VM. Additional information on this plugin can be found here: https://github.com/smdahlen/vagrant-hostmanager.
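A sketch of how that combination might look in a Vagrantfile, assuming the vagrant-hostmanager plugin is installed (the hostname and IP address are invented; see the plugin’s README for its full option list):

```ruby
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"

  # Make the VM reachable at a fixed private IP.
  config.vm.network "private_network", ip: "192.168.56.102"

  # Hostname the plugin will map to that IP in the host's hosts file.
  config.vm.hostname = "someproduct.dev"
  config.hostmanager.enabled = true       # update entries on vagrant up / halt
  config.hostmanager.manage_host = true   # edit the host machine's hosts file
end
```

With this in place, http://someproduct.dev in a browser on the host resolves straight to the VM.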

 

Vagrant Share – This plugin, installed by default, allows you to share your environment with anyone, anywhere using a free account with HashiCorp.

 

Vagrant Librarian Puppet – This plugin allows for Puppet modules to be installed using Librarian-Puppet.

 

Vagrant Ansible Local – This plugin allows you to use Ansible as your provisioner, but runs Ansible from within the guest VM rather than requiring the host machine to have Ansible installed.

 

Providers – Although this isn’t a specific plugin, there are many different plugins that allow Vagrant to be run on other providers, such as Parallels, KVM, AWS, DigitalOcean, and many more. For a complete Vagrant plugin listing, you can check this web page: http://vagrant-lists.github.io/

 

Coding Standards

Coding standards are set definitions of how to structure your code in any given project. They apply to everything from naming conventions and spacing to variable names, opening and closing bracket placement, and so on.

 

We use coding standards as a way to keep code uniform throughout an entire project, no matter the number of developers that may work on it. If you’ve ever had to work on a project that has no consistency in variable names, class or method names, and so on, then you’ve experienced what it is like to work through code that didn’t adhere to coding standards.

 

Imagine how much easier the code would be to both follow and write if you knew exactly how it should be structured throughout the entire project.

 

There are a number of PHP code standards that have waxed and waned in popularity and prevalence throughout the years. There are the PEAR Standards, which are very detailed; the Zend Framework Standard, which is promoted by Zend; and within the last five years or so, we’ve seen standards created by a board of developers called PHP-FIG.

 

Although this blog is focused on PHP-FIG, there are no right or wrong answers on the standards you pick. The important takeaway from this blog is that it’s important to at least follow some standards! Even if you create your own, or decide to create your own variation that deviates a bit from an already popular existing one, just pick one and go with it.

 

We’ll take a look at the PHP-FIG standards body and the standards they’ve developed and promote. We’ll also look at the tools you can use to help enforce the proper use of given standards throughout your project’s development.

 

A Look at PHP-FIG

PHP-FIG (http://php-fig.org) is the PHP Framework Interoperability Group, a small group of people originally formed at the php|tek conference in 2009 to create a standards body for PHP frameworks. It has grown from five founding members to twenty and has published several standards.

 

The PHP Standards Recommendations are:

  • PSR-0 - Autoloader Standard
  • PSR-1 - Basic Coding Standard
  • PSR-2 - Coding Style Guide
  • PSR-3 - Logger Interface
  • PSR-4 - Improved Autoloader Standard

 

For this blog, we’ll look at PSR-1 and PSR-2, which stand for PHP Standard Recommendations 1 and 2. These standards are fairly straightforward, easy to follow, and could even be used as a solid foundation to create your own coding standards if you wanted.

 

PSR-1 — Basic Coding Standard

The full spec of this standard can be found at http://www.php-fig.org/psr/psr-1/. This is the current standard as of the time of writing. This section is meant to give you a general overview of the standard and some basic examples of following it. The standard is broken down into the following structure:

  • Files
  • Namespace and Class Names
  • Class Constants, Properties, and Methods

 

Files

The standards definitions for Files under PSR-1 are described in this section.

PHP Tags

PHP code must use <?php tags or the short echo tag in <?= format. No other tag is acceptable, even if you have short tags enabled in your PHP configuration.

 

Character Encoding

PHP code must use only UTF-8 without the byte-order mark (BOM). For the most part, this isn't one you have to worry about: as long as you're writing code in a text editor meant for coding (such as Sublime Text, TextWrangler, Notepad++, etc.) or an IDE, a BOM won't just automatically be included.

 

The biggest reason the BOM isn't allowed is the issues it can cause: when PHP includes a file that has a BOM, the BOM is treated as output, which breaks any headers you try to set afterward.

 

Side Effects

This standard says that a PHP file should either declare new symbols (classes, functions, constants, etc.) or execute logic with side effects, but never both. They use the term side effects to denote logic executed that's not directly related to declaring a class, functions or methods, constants, etc.

 

So, in other words, a file shouldn’t both declare a function AND execute that function, as seen in the example that follows, thus enforcing better code separation.

<?php

// Execution of code
myFunction();

// Declaration of function
function myFunction()
{
    // Do something here
}
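To comply with the rule, the declaration and the execution would live in separate files. Here is a minimal sketch of the declaration-only file (the file name and return value are illustrative, not part of the standard):

```php
<?php
// functions.php: declarations only, no side effects

function myFunction()
{
    return 'did something';
}
```

A second file (for example, run.php) would then require functions.php and call myFunction(), keeping declaration and execution separate.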

 

Namespace and Class Names

The standards definitions for namespaces and class names under PSR-1 are as follows:

Namespaces and classes must follow an autoloading PSR, which is currently either PSR-0, the Autoloading Standard, or PSR-4, the Improved Autoloading Standard. By following these standards, a class is always in a file by itself (there are not multiple classes declared in a single file), and it includes at least a namespace of one level.

 

Class names must be declared using StudlyCaps.

Here is an example of how a class file should look:

<?php
namespace Thesis\PhpDevTools;

class MyClass
{
    // methods here
}

 

Class Constants, Properties, and Methods

Under this standard, the term class refers to all classes, interfaces, and traits.

 

Constants

Class constants must be declared in all uppercase using underscores as separators.
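For example (the class and constant names here are illustrative):

```php
<?php

class AppConfig
{
    // All uppercase, with underscores separating words
    const DEFAULT_TIMEOUT = 30;
    const API_VERSION = '2.1';
}
```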

 

Properties

The standards are fairly flexible when it comes to properties within your code. It's up to you whether to use $StudlyCaps, $camelCase, or $under_score property names, and you can even mix conventions across different scopes. Whatever your scope (vendor, package, class, or method level), just be consistent within that given scope.

 

However, I would argue that it’s really best to find one and stick to it throughout all of your code. It will make the uniformity and readability of your code much easier as you switch through classes and such.

 

Methods

Method names must be declared using camelCase(). Let’s take a look at this standard in use:

<?php
namespace Thesis\PhpDevTools;

class MyClass
{
    const VERSION = '1.0';

    public $myProperty;

    public function myMethod()
    {
        $this->myProperty = true;
    }
}

 

This wraps up all there is to the PSR-1 Basic Coding Standard. As you can see, it’s really straightforward, easy to follow, and easy to get a handle on just after your first read-through of it. Now, let’s look at PSR-2, which is the Coding Style Guide.

 

PSR-2 — Coding Style Guide

This guide extends and expands on PSR-1 and covers standards for additional coding structures. It is a longer read than PSR-1. This guide was made by examining commonalities among the various PHP-FIG member projects.

 

The full spec of this standard can be found at http://www.php-fig.org/psr/psr-2/. As with PSR-1, this is the current standard as of the time of writing. This section is meant to give you a general overview of the standard and some basic examples of following it. The standard is broken down into the following structure:

  • General
  • Namespace and Use Declarations
  • Classes, Properties, and Methods
  • Control Structures
  • Closures

 

General

In addition to the following rules, in order to be compliant with PSR-2, the code must follow all of the rules outlined in PSR-1 as well.

 

Files

All PHP files must use the Unix linefeed (LF) line ending, must end with a single blank line, and must omit the closing ?> tag if the file contains only PHP.

 

Lines

The standards for lines follow a number of rules. The first three rules deal with line lengths:

  • There must not be a hard limit on the length of a line.
  • There must be a soft limit of 120 characters.
  • Lines should not be longer than 80 characters and should be split into multiple lines if they go over 80 characters.

 

The last two rules seem to contradict each other a bit, but I believe the reasoning behind them is that, generally, 80 characters has been the primary standard length for lines of code. There are many, many arguments about why it's 80, but most agree it's the most readable length across devices and screen sizes.

 

The soft limit of 120 acts more as a visual reminder that you've drifted past 80: a line of up to 120 characters is still easily readable in most IDEs and text editors on various screen sizes, but it deviates from the widely accepted 80.

 

The no-hard-limit rule is there because there are occasional scenarios where what you need to fit on a single line surpasses both the 80- and 120-character limits.

 

The remaining line rules are:

  • There must not be trailing whitespace at the end of non-blank lines.
  • Blank lines may be added to improve readability and to indicate related blocks of code. This one can really help so that all of your code does not run together.
  • You can only have one statement per line.

 

Indentation

This rule states that you must use four spaces and never use tabs. I’ve always been a proponent of using spaces over tabs, and most any code editor or IDE can easily map spaces to your tab key, which just makes this rule even easier to follow.

Keywords and true, false, and null

This rule states that all keywords must be in lowercase as must the constants true, false, and null.
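A quick illustration: keywords such as if and else, along with the constants true, false, and null, are all written in lowercase, never TRUE or NULL (the variable name is illustrative):

```php
<?php

if (true) {
    $flag = null;
} else {
    $flag = false;
}
```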

 

Namespace and Use Declarations

This standard states the following with regard to namespaces and use declarations:

  • There must be one blank line after the namespace is declared.
  • Any use declarations must go after the namespace declaration.
  • You must only have one use keyword per declaration. So even though you can easily define multiple declarations in PHP separated by a comma, you have to have one per line, with a use declaration for each one.
  • You must have one blank line after the use block.
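Putting these rules together, the top of a file might look like this (the class name and alias are illustrative):

```php
<?php
namespace Thesis\PhpDevTools;

use DateTime;
use ArrayObject as Container;

class Importer
{
    // class body
}
```

Note the single blank line after the namespace declaration, one use keyword per line, and a blank line after the use block.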
Classes, Properties, and Methods

In these rules, the term class refers to all classes, interfaces, and traits.

 

Classes

The extends and implements keywords must be declared on the same line as the class name. The opening brace for the class must go on its own line, and the closing brace must appear on the next line after the body of your class.

 

 Lists of implements may be split across multiple lines, where each subsequent line is indented once. When doing this, the first item in the list must appear on the next line, and there must only be one interface per line.
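Here is a sketch of both placements, with an implements list split across multiple lines (the class names are illustrative; the interfaces are PHP built-ins):

```php
<?php
namespace Thesis\PhpDevTools;

class BaseCollection
{
}

class ItemCollection extends BaseCollection implements
    \Countable,
    \JsonSerializable
{
    public function count(): int
    {
        return 0;
    }

    public function jsonSerialize(): array
    {
        return [];
    }
}
```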

 

Properties

  • Visibility (public, private, or protected) must be declared on all properties in your classes.
  • The var keyword must not be used to declare a property.
  • There must not be more than one property declared per statement.

 

  • Property names should not be prefixed with a single underscore to indicate protected or private visibility. This practice is enforced with the Pear coding standards, so there is a good chance you’ve seen code using this method.
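In practice, a compliant set of property declarations looks like this (the class and property names are illustrative):

```php
<?php
namespace Thesis\PhpDevTools;

class Account
{
    // Visibility declared on every property, one per statement
    public $name;

    protected $balance = 0;

    // No leading underscore, even though it is private
    private $secretToken;
}
```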

 

Methods

Visibility (public, private, or protected) must be declared on all methods in your classes, just like with properties. Method names should not be prefixed with a single underscore to indicate protected or private visibility. Just as with properties, there’s a good chance you’ve seen code written this way; however, it is not PSR-2 compliant.

 

Method names must not be declared with a space after the method name; the opening brace must go on its own line, and the closing brace on the next line following the body of the method. No space should be used after the opening or before the closing parenthesis.

 

Method Arguments

  • In your method argument list, there must not be a space before each comma, but there must be one space after each comma.
  • Method arguments with default values must go at the end of the argument list.

 

  • You can split your method argument lists across multiple lines, where each subsequent line is indented once. When using this approach, the first item in the list must be on the next line, and there must be only one argument per line.

 

  • If you use the split argument list, the closing parenthesis and the opening brace must be placed together on their own line with one space between them.
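Combining these rules, a split argument list might look like this (the class, method, and parameter names are illustrative); note the closing parenthesis and opening brace sharing their own line, separated by one space:

```php
<?php
namespace Thesis\PhpDevTools;

class Formatter
{
    public function format(
        string $text,
        int $width = 80,
        array $options = []
    ) {
        // Trim the text to the requested width
        return substr($text, 0, $width);
    }
}
```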

 

Abstract, Final, and Static

  • When present, the abstract and final declarations must precede the visibility declaration.
  • When present, the static declaration must come after the visibility declaration.
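For example (the class and member names are illustrative):

```php
<?php
namespace Thesis\PhpDevTools;

abstract class Shape
{
    // static comes after the visibility declaration
    protected static $count = 0;

    // abstract precedes the visibility declaration
    abstract protected function area();

    // final precedes the visibility declaration
    final public static function register()
    {
        static::$count++;

        return static::$count;
    }
}
```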

 

Method and Function Calls

When you make a method or function call, there must not be a space between the method or function name and the opening parenthesis. There must not be a space after the opening parenthesis or before the closing parenthesis. In the argument list, there must not be a space before each comma, but there must be one space after each comma.

 

The argument list may also be split across multiple lines, where each subsequent line is indented once. When doing this, the first item in the list must be on the next line, and there must be only one argument per line.
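A few calls that follow these rules (the function name is illustrative; implode is a PHP built-in):

```php
<?php

function bar($x = 0)
{
    return $x + 1;
}

// No space before the opening parenthesis, one space after each comma
$a = bar();
$b = bar(41);

// Argument list split across multiple lines, one argument per line
$joined = implode(
    ', ',
    ['one', 'two', 'three']
);
```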

 

Control Structures

There are several general style rules for control structures. They are as follows:

  • There must be one space after the control structure keyword.

 

  • There must not be a space after the opening parenthesis or before the closing parenthesis.
  • There must be one space between the closing parenthesis and the opening brace, and the closing brace must be on the next line after the body.
  • The structure body must be indented once.

 

The body of each structure must be enclosed by braces. This is definitely a helpful rule because it creates uniformity and increases readability in control structures. Omitting braces, even though it's allowed for single-line statements or when using the PHP alternate syntax, can sometimes lead to less readable code.

 

The next few rules are all basically identical for the following control structures. Let’s look at simple examples of each of them.

 

if, elseif, else

This builds on the previous rule and states that a control structure should place else and elseif on the same line as the closing brace from the previous body. Also, you should always use elseif instead of else if so the keywords are all single words. For example, this is a fully compliant if structure:

<?php
if ($a === true) {
    // body
} elseif ($b === true) {
    // body
} else {
    // body
}

 

switch, case

The case statement must be indented once from the switch keyword, and the break keyword or another terminating keyword (return, exit, die, etc.) must be indented at the same level as the case body. There must be a comment such as // no break when fall-through is intentional in a non-empty case body. The following example is a compliant switch structure:

<?php

switch ($a) {
    case 1:
        echo "Here we are.";
        break;
    case 2:
        echo "This is a fall through";
        // no break
    case 3:
        echo "Using a different terminating keyword";
        return 1;
    default:
        // our default case
        break;
}

 

while, do while

The while and do while structures place the braces and spaces similarly to those in the if and switch structures:

<?php

while ($a < 10) {
    // do something
}

do {
    // something
} while ($a < 10);

for

 

The PSR-2 documentation shows that a for statement should be formatted as in the following example. One thing that is unclear, both from the example they list and from their general rules for control structures, is whether spaces are required around the operators in $i = 0 and $i < 10 in the example that follows.

 

Removing the spaces and running it against PHP Code Sniffer with PSR-2 validation will result in it passing validation, so this is left up to you according to your preference. Both of the following examples are PSR-2 compliant:

 

<?php
for ($i = 0; $i < 10; $i++) {
    // do something
}
for ($j=0; $j<10; $j++) {
    // do something
}

 

foreach

A PSR-2 compliant foreach statement should be structured as in the following example. Unlike in the for statement, a space is required around the => operator when separating key and value pairs:

 

<?php
foreach ($array as $a) {
    // do something
}

foreach ($array as $key => $value) {
    // do something
}

try, catch (and finally)

 

Last in the control structure rules is the try-catch block. A try-catch block should look like the following example. The PSR-2 standard doesn't include anything about the finally block (PHP 5.5 and later), but if you use it, you should follow the same structure as the rest of the block:

<?php
try {
    // try something
} catch (ExceptionType $e) {
    // catch exception
} finally {
    // added a finally block
}

 

Closures

Closures, also known as anonymous functions, have a number of rules to follow for the PSR-2 standard. They are very similar to the rules that we have for functions, methods, and control structures.

 

This is mostly due to closures being anonymous functions, so the similarities between them and functions and methods make them close to identical. The PSR-2 rules are as follows:

 

Closures must be declared with a space after the function keyword and a space both before and after the use keyword. The opening brace must go on the same line, and the closing brace must go on the next line, following the body, just as with functions, methods, and control structures.

 

There must not be a space after the opening parenthesis of the argument list or variable list, and there must not be a space before the closing parenthesis of the argument list or variable list. Again, this is the same as with functions and methods.

 

There must not be a space before each comma in the argument list or variable list, and there must be one space after each comma in the argument list or variable list. 

 

Closure arguments with default values must go at the end of the argument list, just as with regular functions and methods. Here are a few examples of closures that are PSR-2 compliant:

<?php
// Basic closure
$example = function () {
    // function code body
};

// Closure with arguments
$example2 = function ($arg1, $arg2) {
    // function code body
};

// Closure inheriting variables
$example3 = function () use ($var1, $var2) {
    // function code body
};

 

Just as with functions and methods, argument lists and variable lists may be split across multiple lines. The same rules that apply to functions and methods apply to closures. Lastly, if a closure is being used directly in a function or method call as an argument, it must still follow and use the same formatting rules. For example:

<?php
$myClass->method(
    $arg1,
    function () {
        // function code body
    }
);

 

These rules conclude the PSR-2 Coding Style Guide. As you can see, they build on the basic rules set forth in PSR-1, and most of them build on rules from each other and share a number of commonalities.

 

Omissions from PSR-2

There are many elements that were intentionally omitted by the PSR-2 standard (although these items may eventually become part of the specification over time). These omissions include but are not limited to the following, according to the PSR-2 specification:

  • Declaration of global variables and global constants
  • Declaration of functions
  • Operations and assignment
  • Inter-line alignment
  • Comments and documentation blocks
  • Class name prefixes and suffixes
  • Best practices

 

Checking Coding Standards with PHP Code Sniffer

PHP Code Sniffer

Coding standards are a great thing to have, and online resources, such as the documentation provided by PHP-FIG on PSR-1 and PSR-2, help aid you in making the correct choices so that your code is compliant.

 

However, it’s still easy to forget a rule or mistype something to make it invalid, or maybe you’re part of a team and it’s impossible to do code reviews on everyone’s code to make sure all of their commits are compliant.

 

This is where it’s good to have a code validator that everyone can easily run, or that could even be incorporated into an automated process, to ensure all code is compliant with PSR-1 and PSR-2, or even with another coding standard that you choose.

 

A tool such as this exists, and it’s called the PHP Code Sniffer, also referred to as PHP_CodeSniffer by Squizlabs. PHP_CodeSniffer is a set of two PHP scripts.

 

The first is phpcs, which, when run, will tokenize PHP files (as well as JavaScript and CSS files) to detect violations of a defined coding standard. The second is phpcbf, which can be used to automatically correct coding standard violations.
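As a sketch of typical usage (assuming both scripts are installed and on your PATH, and that your project code lives in an illustrative src/ directory):

```shell
# Report PSR-1 and PSR-2 violations found in src/
phpcs --standard=PSR1,PSR2 src/

# Automatically fix the violations phpcbf knows how to correct
phpcbf --standard=PSR1,PSR2 src/
```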

 

PHP_CodeSniffer can be installed a number of different ways. You can download the Phar files for each of the two commands, you can install using Pear, or you can install using Composer. Here are the steps for each of these installation methods.

 

1. Downloading and executing Phar files:

$ curl -OL https://squizlabs.github.io/PHP_CodeSniffer/phpcs.phar
$ curl -OL https://squizlabs.github.io/PHP_CodeSniffer/phpcbf.phar

 

2. If you have Pear installed you can install it using the PEAR installer. This is done by the following command:

$ pear install PHP_CodeSniffer

 

3. Lastly, if you use and are familiar with Composer then you can install it system-wide with the following command:

$ composer global require "squizlabs/php_codesniffer=*"

Note The full online documentation for PHP CodeSniffer can be found at https://github.com/squizlabs/PHP_CodeSniffer/wiki.

 

PHP_CodeSniffer Configuration

There are a number of different configuration options and methods for configuring PHP_CodeSniffer.

 

Going through all of them is out of the scope of this blog, so the online documentation listed earlier is the best resource for finding all available options. However, let’s look at a few different options and how we can set them.

Note PHP_CodeSniffer can also run in batch mode if needed.

 

Default configurations can be changed using the --config-set argument. For example, to change the default standard to check against from the PEAR standard (which phpcs uses by default) to PSR-1 and PSR-2, you could set it this way:

$ phpcs --config-set default_standard PSR1,PSR2
Config value "default_standard" added successfully

 

You can also specify default configuration options directly in a project using a phpcs.xml file. This will be used if you run phpcs in a directory without any other arguments. Here is an example:

<?xml version="1.0"?>
<ruleset name="Thesis_PhpDevTools">
    <description>The default PHPCS configuration for Blog 3.</description>
    <file>invalid.php</file>
    <file>valid.php</file>
    <arg name="report" value="summary"/>
    <rule ref="PSR1"/>
    <rule ref="PSR2"/>
</ruleset>

 

Within this file, the files to check are specified, as well as the rules we want to use. Multiple rules are specified using multiple <rule /> tags.

 

PHP_CodeSniffer Custom Standard

PHP_CodeSniffer Custom

In the event that you have your own standard, or have adopted most of PSR-1 and PSR-2 but decided to deviate from a rule here or there, you can create your own custom standard for PHPCS to use.

 

Such a custom standard can be based on the PSR-1 and PSR-2 standards and just override the parts you wish to deviate from. This is done using a ruleset.xml file, which is then passed to phpcs via the --standard argument, just as with any other coding standard.

 

At the very least, a ruleset.xml file has a name and a description and is formatted just as the PHPCS.xml file we created is. However, just having the name and description does not provide PHPCS with any instructions to override from an existing ruleset.

 

For this example, say we want to change the standard to not restrict method names to camelCase. This would be done with a configuration like this:

<?xml version="1.0"?>
<ruleset name="Thesis PhpDevTools CustomStandard">
    <description>A custom standard based on PSR-1 and PSR-2</description>
    <rule ref="PSR1">
        <exclude name="PSR1.Methods.CamelCapsMethodName"/>
    </rule>
    <rule ref="PSR2"/>
</ruleset>

 

With this ruleset, we see that all we needed to do was define a name for our rule, include the rulesets we wanted to base our standard off of, then specify the rule we wanted to exclude out of those rulesets.

 

Now, if we run the validation against our invalid.php file we’ll see the errors drop to four from five, as the method name violation is gone because our new standard doesn’t restrict it to camelCase:

 

$ phpcs --standard=custom_ruleset.xml invalid.php
FILE: /Thesis/source/invalid.php
----------------------------------------------------------------------
FOUND 4 ERRORS AFFECTING 3 LINES
----------------------------------------------------------------------
3 | ERROR | Each class must be in a namespace of at least one level
| | (a top-level vendor name)
5 | ERROR | Class constants must be uppercase; expected VERSION but
| | found version
7 | ERROR | The var keyword must not be used to declare a property
7 | ERROR | Visibility must be declared on property "$Property"

 

PHP_CodeSniffer IDE Integration

As mentioned earlier, some IDEs such as PHPStorm and NetBeans have ways to integrate PHP_CodeSniffer directly within them.

 

The exact process of configuring it for these IDEs can change as their respective vendors release new versions, so we won’t cover this here. As of the time of this writing, the steps to set this up for PHPStorm are covered in the online documentation.

 

In my PHPStorm install, I have PHP_CodeSniffer configured and set to the PSR-1 and PSR-2 standards. With this configured, I get immediate feedback from PHPStorm if any of the code I’m writing deviates from these standards by way of a yellow squiggly line under the line of code that’s in violation.

 

You can also run validation on the file and see the inspection rules directly within PHPStorm.

 

Code Documentation Using phpDocumentor

Not all coding standards provide rules regarding code comments. For example, PSR-1 and PSR-2 do not have set comment rules in place. However, it is just as important to establish a standard that everyone working on your project will follow when it comes to comments.

 

Arguably, one of the most popular formats for PHP is the DocBlock format, used to provide information about a class, method, function, or other structural element.

 

When used in conjunction with the phpDocumentor project, you have the ability to automatically generate code documentation for your entire project and provide an easy reference for all developers to follow.

 

Another oft-used code documentation tool is PHPXref (http://phpxref.sourceforge.net).

In general, phpDocumentor and PHPXref have two different purposes:

  • phpDocumentor is mainly used for generating real documentation from the source in a variety of different formats.
  • PHPXref is mainly used to help the developer browse the code documentation of large PHP projects.

 

Installing phpDocumentor

phpDocumentor can be installed a few different ways. You can download the Phar file and execute it directly, you can install using Pear, or you can install using Composer. Here are the steps for each of these installation methods.

First of all, you want to check the PEAR prerequisites: http://pear.php.net/manual/en/installation.php

 

Download and execute the Phar files:

$ curl -OL http://www.phpdoc.org/phpDocumentor.phar

If you have Pear installed you can install it using the PEAR installer. This is done with the following commands:

$ pear channel-discover http://pear.phpdoc.org
$ pear install phpdoc/phpDocumentor

Lastly, you can use Composer to install it system wide with the following command:

$ composer global require "phpdocumentor/phpdocumentor:2.*"

 

Using phpDocumentor

As previously mentioned, phpDocumentor should be used to document structural elements in your code. phpDocumentor recognizes the following structural elements:

  • Functions
  • Constants
  • Classes
  • Interfaces
  • Traits
  • Class Constants
  • Properties
  • Methods

 

To create a DocBlock for any of these elements, you must always format them the exact same way—they will always precede the element, you will always have one block per element, and no other comments should fall between the DocBlock and the element start.

 

A DocBlock's format is always enclosed in a comment type called a DocComment. The DocComment starts with /** and ends with */. Each line in between should start with a single asterisk (*). The following is an example of a DocBlock for the example class we created earlier:

/**
 * Class ExampleClass
 *
 * This is an example of a class that is PSR-1 and PSR-2 compliant. Its only
 * function is to provide an example of how a class and its various properties
 * and methods should be formatted.
 *
 * @package Thesis\PhpDevTools
 */
class ExampleClass
{
    const VERSION = '1.0';

    public $exampleProp;

    public function exampleMethod()
    {
        $this->exampleProp = true;
    }
}

 

As we can see with this example, a DocBlock is broken into three different sections:

 

Summary – The summary line should be a single line if at all possible and is just that—a quick summary of what the element is that we’re documenting.

 

Description – The description is more in-depth in describing information that would be helpful to know about our element. Background information or other textual references should be included here if they are available and/or needed. The description area can also make use of the Markdown markup language to stylize text and provide lists and even code examples.

 

Tags / Annotations – Lastly, the tags and annotations section provides a place to provide useful, uniform meta-information about our element. All tags and annotations start with an “at” sign (@), and each is on its own line.

 

Popular tags include the parameters available on a method or function, the return type, or even the author of the element. In the preceding example, we use the package tag to document the package our class is part of.

 

Note Each part of a DocBlock is optional; however, a description cannot exist without a summary line. Let’s expand on the preceding example and provide DocBlocks for each of the structural elements of our example class:

 

<?php
namespace Thesis\PhpDevTools;

/**
 * Class ExampleClass
 *
 * This is an example of a class that is PSR-1 and PSR-2 compliant. Its only
 * function is to provide an example of how a class and its various properties
 * and methods should be formatted.
 *
 * @package Thesis\PhpDevTools
 */
class ExampleClass
{
    /**
     * Class version constant
     */
    const VERSION = '1.0';

    /**
     * Class example property
     *
     * @var $exampleProp
     */
    public $exampleProp;

    /**
     * Class example method
     *
     * This method is used to show as an example how to format a method that is
     * PSR-1 & PSR-2 compliant.
     *
     * @param bool $value This is used to set our example property.
     */
    public function exampleMethod($value)
    {
        $this->exampleProp = $value;
    }

    /**
     * Gets the version of our class
     *
     * @return string Version number
     */
    public function classVersion()
    {
        return self::VERSION;
    }
}

 

Now that we’ve expanded our example, you can see several unique tags being used as well as a sampling of how you can mix the three sections together as needed. For a full listing of the tags that are available for phpDocumentor to use, see the full phpDocumentor online documentation at http://www.phpdoc.org/docs/latest/index.html.

 

Running phpDocumentor

In addition to having nice, uniform code comments for your project when using phpDocumentor and the DocBlock format, you also now have the power and ability to effortlessly generate code documentation that will transform your comments into a documentation resource. Once you have phpDocumentor installed, it’s simply a matter of running it to produce this documentation.

 

You need just two of the three command-line options below (either -d or -f, plus -t) to produce your first set of documentation. The options are:

  • -d – This specifies the directory or directories of your project that you want to document.
  • -f – This specifies the file or files in your project that you want to document.
  • -t – This specifies the target location where your documentation will be generated and saved.

 

For this example, we’ll run it against our one example class from before:

$ phpdoc -f valid.php -t doc

 

Here, we're telling phpDocumentor to run against the file valid.php and to save the documentation in a new folder called doc. If we look in the new doc folder, we will see the many folders and assets required for the new documentation. You can view it by opening the generated index.html file in a web browser.

 

Non-structural Comments

Lastly, since phpDocumentor only has provisions for structural comments, it is recommended that you establish guidelines for your coding standard that extend to non-structural comments.

 

The Pear coding standard, for example, provides a general rule of thumb that is a great strategy to follow. Under their recommendations, you should always comment any section of code that you don’t want to have to describe or whose functionality you might forget if you have to come back to it at a later date.

 

It’s recommended that you use either the C-style comments (/* */) or C++ comments (//). It’s discouraged to use the Perl/Shell-style comments (#), even though it is supported by PHP.
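To illustrate the three comment styles PHP supports (the variable is illustrative):

```php
<?php

// A C++-style single-line comment

/*
 * A C-style comment spanning
 * multiple lines
 */
$enabled = true;

# A Perl/Shell-style comment: supported by PHP, but discouraged
```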
