Using MediaWiki as a documentation tool

Some years ago a coworker introduced me to using MediaWiki as a documentation tool, and I was immediately hooked.

The biggest pro is that it’s very easy to use and install (millions of people already use it on Wikipedia) and you can use it for almost anything.
IT documentation is always changing and constantly needs to be updated; unfortunately, the harder it is to update, the less up to date it will be.

I’ll start by listing some of the pros of using MediaWiki for your IT documentation:

  • Open source software, no cost for the application. Installs on a L.A.M.P. stack (on a Microsoft platform, ScrewTurn Wiki is an alternative).
  • One dedicated platform for documentation; you’re not forced to share the environment (and search results) with, for example, other SharePoint sites.
  • Fast and easy to create pages, update or roll back changes.
    Use standard wiki markup to create pages, or feel free to use a WYSIWYG editing extension.
  • Built-in version control for pages, plus LDAP / Microsoft Active Directory support. Restrict editing to the IT department and give read permissions to everyone else.
  • Text-only content makes searches very useful and you almost always find what you are looking for. If you can’t find it, someone probably hasn’t documented it yet. No more having to download and open Word documents to find the correct documentation.
  • For offline use, install a third-party extension for exporting pages to offline PDF files.

This flexibility does, however, require you to set some ground rules, not unlike those used by Wikipedia.

Here are some rules I’ve found useful when implementing MediaWiki:

  • One page per server, application or area of documentation. Never split one topic across several pages.
  • Use descriptive page names and avoid names that can have multiple meanings.
  • Create templates (also stored in the wiki) for servers and applications that can be used when creating new pages.
  • Use headers to create a hierarchy within your page. Very useful when linking into larger pages.
  • Use capital letters for server names; it makes them easier to identify.
  • Use the server and application pages for logging recent changes. Type in what you did and when you did it to make troubleshooting easier.
  • Force users to search before they create a page, to avoid duplicates with similar names.
  • Only allow uploads of images (like screenshots or graphs) to the wiki. Never allow PDF, Word or Excel files to be uploaded; the wiki should not be a document store.
  • Assign one or more MediaWiki evangelists who help out with the initial design of the wiki; they can also help with questions from other users.

Software requirements for the installation could be Ubuntu (or your favorite Linux distribution), Apache, MySQL, PHP and MediaWiki.
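
The MySQL part of that setup is small – a database and a dedicated user for MediaWiki to connect with. A minimal sketch, with placeholder names:

CREATE DATABASE wikidb;
CREATE USER 'wikiuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL ON wikidb.* TO 'wikiuser'@'localhost';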

Use mysqldump to back up your MySQL database to local disk, and have your backup software back up the files on the server. That way you can easily restore everything to a new server when needed.

Good luck with your wiki!


Distribution database causing high disk IO, many reads/sec

Playing around with VMware performance data, I found a server with unusually high disk IO – and by that I mean registering around 300 MB/sec all the time.
This server was a SQL Server 2008 R2 with some tables being replicated to a second server. Looking closer, I noticed that the distribution database was responsible for this, with all 300 MB/sec being reads and no writes.
The distribution db was quite large as well, around 30 GB. My guess was that the replication jobs had some problem cleaning out old data and had to go through those 30 GB over and over again.
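
To see how much history was piling up, you can count the rows in the replication tables inside the distribution database – a quick check along these lines (assuming the default database name):

USE distribution
SELECT COUNT(*) AS [ReplCommands] FROM MSrepl_commands WITH (NOLOCK)
SELECT COUNT(*) AS [ReplTransactions] FROM MSrepl_transactions WITH (NOLOCK)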

I started by looking at the Replication Monitor and at the settings for each of the publications.

The cause wasn’t the “Subscriptions never expire, but they can be deactivated until they are reinitialized” setting, but instead the “Snapshot always available” and “Allow anonymous subscriptions” settings. You can see them by opening the properties of each publication.

Both were set to true, and that’s why the distribution database had so many rows in its tables: they were never deleted, and the SQL Agent jobs kept going through the records over and over again.

Changing them both to false, and letting the “Distribution clean up: distribution” SQL Agent job do its work, cleaned out the distribution database. After that the disk IO went back to normal.

USE <my replicated database>
EXEC sp_changepublication @publication = '<my publication name>', 
@property = 'immediate_sync', @value = 'false'
EXEC sp_changepublication @publication = '<my publication name>', 
@property = 'allow_anonymous', @value = 'false'

The solution to my problem was found here

Don’t forget to shrink your distribution database afterwards!
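
The shrink itself can be as simple as this (the 10 means leaving 10 percent free space):

USE master
DBCC SHRINKDATABASE ([distribution], 10)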


Review of Ola Hallengren’s Scripts

Ola Hallengren is a Swede (I presume) who has developed these nifty scripts for performing backups, index maintenance and database integrity checks.

I’ve been using them for some years now and really favour them over the built-in maintenance plans in SQL Server, which are less customizable and quite blunt.

The pros:
  • Highly customizable. You can decide what to do, how and when to do it (see the example call after this list).
  • Standardized. They have been around for some time, and feedback from users has led to a very robust product.
  • Open Source. Free to modify, update and even sell if that is what you would like to do.
  • Fast feedback from Mr. Hallengren when needed; I got a reply just a day later when I had some improvement suggestions.
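
To give a feel for the customizability, here is roughly what a full backup call looks like – a sketch from memory, so check the documentation on ola.hallengren.com for the exact parameters:

EXECUTE dbo.DatabaseBackup
@Databases = 'USER_DATABASES',
@Directory = 'X:\Backup',
@BackupType = 'FULL',
@Verify = 'Y',
@CleanupTime = 24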


There aren’t really any cons, but some small things could be improved:

  • No prefix for tables, stored procedures and SQL Agent jobs. A default prefix, like OlaH_ or similar, would help group the jobs, stored procedures and tables together, keeping the database tidy and the objects easy to locate.
  • No versioning. A version number in the stored procedures would help identify which version of the scripts you are running and which features you have. The suggested use of a checksum doesn’t help with this.

All in all – I highly recommend the use of Ola Hallengren’s scripts!


SQL Server 2008 R2 failed to install

I recently got this error when trying to install SQL Server 2008 R2 Developer Edition (64-bit) on my Windows 7 Enterprise (64-bit) workstation:

SQL Server setup has encountered the following error:

MsiGetProductInfo failed to retrieve ProductVersion for package with Product Code = ‘{633F3A7E-471D-4C08-A643-C184A2EE19AB}’. Error code: 1608.

Found the solution to this on the web

The registry stores MSI product codes in a packed format where the GUID segments are reversed, so I reversed the characters in the first part of the GUID string: 633F3A7E became E7A3F336. Searching for this string in the registry using regedit.exe located the key, which I then deleted (make sure you back up anything you change in your registry before doing this!).
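
Incidentally, you can let SQL Server do the reversing for you – any tool that reverses a string works just as well:

SELECT REVERSE('633F3A7E')  -- returns E7A3F336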


After doing this, the installation went ok!


Change default Collation in SQL Server 2005

I recently had some trouble selecting the proper collation/sort order when setting up a new SQL Server 2005 installation. The installer offered a small number of collations/sort orders, but not the one I needed.

The official documentation didn’t give much, but I managed to find a command-line syntax that fixed the problem and let me specify a collation that wasn’t available during installation.

The syntax goes like this:

start /wait D:\SQLInstall\setup.exe /qb INSTANCENAME=MSSQLSERVER 
SAPWD=mysapassword SQLCOLLATION=SQL_Latin1_General_CP1_CI_AS

If you want to perform the rebuild without any output, run it with /qn instead of /qb.
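
Afterwards you can verify that the new server collation took effect:

SELECT SERVERPROPERTY('Collation')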



Query WSUS server

We patch our Windows servers on a regular basis, and it would be nice to get a real-time view of just how patched they are: which servers are not yet patched, and how many patches are about to be applied.

WSUS (Windows Server Update Services) stores its data in a SQL Server database (SUSDB), which makes this an easy task.
The query I used looks like this:

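-- Runs against the WSUS database (SUSDB); SummarizationState 1 and 4 are
-- filtered out below so that only updates still waiting to be applied are counted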
SELECT tbComputerTarget.FullDomainName as [ServerName], 
tbComputerTarget.IPAddress as [IPAddress],
count(tbComputerTarget.FullDomainName) as [MissingPatches],
tbComputerTarget.LastSyncTime as [LastSyncTime]
FROM tbUpdateStatusPerComputer (nolock) 
INNER JOIN tbComputerTarget (nolock) 
ON tbUpdateStatusPerComputer.TargetID = tbComputerTarget.TargetID
WHERE (NOT (tbUpdateStatusPerComputer.SummarizationState IN ('1', '4'))) 
GROUP BY tbComputerTarget.FullDomainName,
tbComputerTarget.IPAddress, tbComputerTarget.LastSyncTime

After putting this on a web page, my colleagues can now easily get a live view of the status of our server park.


Attaching a database that wasn’t detached


I recently had an interesting issue where a virtual SQL Server was deleted from VMware – just like that, with no clean shutdown and certainly no final backup or detach commands. We were fortunate enough to locate and mount the data and log disks, but the OS/SQL partitions were lost, so there was no way to get the original server back online.

I was asked to get the databases back up again, so I got a new virtual SQL Server with the data and log disks mounted.

My first attempt was to simply attach them, just like I would with a detached db, but Microsoft clearly states in Books Online that an operation like that is not supported: in order to attach a data file, it must first have been detached.

Knowing that the data files should be OK, I started testing ways to trick SQL Server into accepting my files.
The solution was to create an empty database with the same name, the same number of data and log files, and roughly the same size (I chose slightly larger) as the ones I wanted to attach.
Once it was created, I took the database offline, swapped the data and log files with the ones I wanted to use, and then brought it online again.
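
In T-SQL, the whole dance looks roughly like this – a sketch with made-up names, paths and sizes, so adjust everything to match the files you salvaged:

CREATE DATABASE MyDatabase
ON (NAME = MyDatabase_data, FILENAME = 'E:\Data\MyDatabase.mdf', SIZE = 35GB)
LOG ON (NAME = MyDatabase_log, FILENAME = 'F:\Log\MyDatabase_log.ldf', SIZE = 5GB)
GO
ALTER DATABASE MyDatabase SET OFFLINE
GO
-- swap the newly created files on disk for the salvaged .mdf/.ldf files here
ALTER DATABASE MyDatabase SET ONLINE
GO
DBCC CHECKDB (MyDatabase)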

This seems to have worked for me: no errors in the error log, DBCC consistency checks came out fine, and when I presented the databases to the end users they were glad to find everything just as it was left.

This was on SQL Server 2008 R2 Developer Edition on Windows Server 2008 R2.


DKIM + Mailman = Trouble

I have a couple of mailing lists running on a virtual Linux box. I set them up a while ago, and they kept on running for a couple of years until a thing called DKIM came in through the cat door and broke them. Unfortunately it took a while before I noticed something was wrong, and even longer to figure out how to fix it.

DKIM is a way to fight spam by automatically signing emails with a cryptographic signature that lets receiving mail servers verify that an email really came from the domain it claims to come from.

I set up Mailman, the application I use for my mailing lists, to send out emails as if they came from the list itself even though one of the members sent them. The reason is that when another member replies, the reply automatically goes back to the whole list instead of to just one member.

Making an email look like it came from a different address will of course invalidate any DKIM signature the emails carry when they reach my server, and that’s what happened to the mystery emails that never arrived.

Other mail servers out there discarded these emails as spam, leaving only cryptic messages in my mail log.

I tried to fix this by enabling DKIM signing (dkim-milter) on my server and adding the necessary key to my DNS records, but after a couple of tries and waiting for DNS to propagate, I settled for letting Mailman simply remove the DKIM signatures from all emails via its REMOVE_DKIM_HEADERS option. Voila – it works!

# Add this in your Mailman config
# (mm_cfg.py, e.g. /etc/mailman/mm_cfg.py on Debian/Ubuntu):
REMOVE_DKIM_HEADERS = Yes

And I’ll save implementing Mailman + DKIM signing for a rainy day.


Memory running low on SQL Server

Looking at performance counters for a SQL Server, I noticed that the Page Life Expectancy (PLE) counter was very low for this server. A quick description of PLE: it measures how long a data page stays in SQL Server’s buffer pool before it is replaced by new data.

The current PLE value can be viewed using the query below; the value you get is in seconds. You should grab it several times over a longer period to get an idea of how your SQL Server performs.

SELECT [counter_name],[cntr_value] FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Manager%' AND
[counter_name] = 'Page life expectancy'
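
To collect the value over a longer period, one option is a small history table filled by a scheduled SQL Agent job – a sketch, with a made-up table name:

CREATE TABLE dbo.PLEHistory (
SampleTime DATETIME NOT NULL DEFAULT GETDATE(),
PLESeconds BIGINT NOT NULL
)

INSERT INTO dbo.PLEHistory (PLESeconds)
SELECT [cntr_value] FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Manager%' AND
[counter_name] = 'Page life expectancy'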

Whitepapers from Microsoft suggest that PLE should be above 300 on average, but I guess it really depends on the kind of application you have.

My server is a standard OLAP database server, but the PLE is around 1–20 seconds throughout peak hours, which I think is really bad. Users also experienced sluggish performance, so something had to be done: adding more memory. Given that the server didn’t have much memory to start with, and the size of the data, it was a no-brainer.



New MySQL database

Here is the syntax for creating a new database in MySQL together with a login that gets full permissions on it. Perfect for setting up that L.A.M.P. :-)


CREATE DATABASE verity_db;

CREATE USER 'verity_user'@'localhost' IDENTIFIED BY 'password';

GRANT ALL ON verity_db.* TO 'verity_user'@'localhost';
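
You can verify the result afterwards with:

SHOW GRANTS FOR 'verity_user'@'localhost';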
