LVM Thin Provisioning
The previous blog post I wrote about LVM described the foundations this great technology is based on. But as I already mentioned there, the real purpose of that post was to provide the basic "common ground" for this one - a post focusing on LVM Thin Provisioning, a really great technology that gets maybe 10 % of the attention and glory it deserves. So what is this amazing thing?
Before anything serious, here again comes the disclaimer: I'm not saying everything written here is 100% correct, precise and complete. This is just how I understand and see things to the extent I care about them.
Let's start with two real-life problems:
- Imagine you are a storage administrator for a large group of users. You want them to think you are a really generous storage administrator who gives every one of them 100 GiB of space. However, you have 1000 users and only 10 TiB of disk space. It's simply impossible to split 10 TiB of space into 1000 chunks of 100 GiB each. Nevertheless, you know that basically none of the users will ever use all the space you want them to think they have, so those 10 TiB might be enough for all the users' data. The problem is that nobody knows in advance which users will need/use what amounts of space. If only it were possible to create a pool from those 10 TiB of disk space and assign it to users on demand when they actually need it, right? And put the no-longer-used space back into the pool for later reuse (potentially by other users).
- You have some data stored in a file system and want to be able to get the state of the file system back to some given point in time - even though the file system is being used and modified. In other words, you want to create a snapshot of the device the file system is stored on and you want to have only the modified blocks take space from your disks (i.e. a copy-on-write snapshot). But then you need to capture the state at some other moment in time. And this all happens again and again. Thus, you need to create snapshots of snapshots of snapshots,... You also want to be able to start over from some state (snapshot), continue from it and preserve the original "branch" of states (snapshots), but you don't want all the snapshots to be modified whenever you modify the original state they all originate from.
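Both use cases boil down to overcommitting - promising more space than physically exists and allocating it only on actual use. A quick back-of-the-envelope check of the numbers from the first use case (the arithmetic here is mine, using integer division):

```shell
users=1000
per_user_gib=100
physical_gib=$((10 * 1024))               # 10 TiB of real disk space
virtual_gib=$((users * per_user_gib))     # 100000 GiB promised to users
# how many times the promises exceed the reality (integer division)
echo $((virtual_gib / physical_gib))      # 9
```

So the storage would be roughly 10x overcommitted - perfectly fine as long as the actual usage stays below the physical 10 TiB.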
Both of the above cases can easily be covered by LVM Thin Provisioning.
Device Mapper

The title of this section should come as no surprise to anybody who read my previous blog post about LVM. As with any other feature provided by LVM, the real magic of thin provisioning is implemented in Device Mapper (DM); LVM just maintains the metadata, properly sets up and tears down the DM devices, etc.
A traditional (non-thin) LV is a block device consisting of extents allocated from its VG. A thin LV is a block device consisting of chunks allocated from its pool. So how do the two differ? Remember what a non-thin LV looks like as a DM device? It uses one or more mappings of contiguous segments on physical volumes (PVs). A thin LV, on the other hand, is a contiguous space of pointers to chunks in its pool.
So the space can be "thinly provisioned" on demand. The file system sees a contiguous space just like on any other block device. However, at first nothing is really allocated and the pointers point to no real chunks in the pool (on PVs). Once the file system writes to the thin LV, one or more chunks from the pool are allocated and the pointers are set to point at these chunks. That's how it's possible to have 1000 block devices of 100 GiB each on 10 TiB of disk space. It is an illusion similar to how virtual memory provides the full address space to every process as if it were the only process running on a machine with enough RAM to cover the whole address space.
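A plain sparse file shows the same allocate-on-first-write behaviour and is a handy (if simplified) model of a thin LV - the apparent size is promised up front, while blocks are only allocated when something is actually written. A small demonstration (the file name and sizes are just for illustration; the exact allocated numbers depend on the file system):

```shell
f=$(mktemp)
truncate -s 1G "$f"    # promise 1 GiB of virtual size, allocate nothing

# apparent size vs. actually allocated blocks (in KiB)
du -k --apparent-size "$f" | cut -f1    # 1048576
du -k "$f" | cut -f1                    # typically 0 - nothing written yet

# write 4 KiB - only now does the file system allocate real blocks
dd if=/dev/zero of="$f" bs=4K count=1 conv=notrunc 2>/dev/null
du -k "$f" | cut -f1                    # a few KiB allocated now

rm -f "$f"
```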
Similarly, when a snapshot is created, a new block device - again a contiguous segment of pointers - is created with all the pointers pointing to the same chunks as in the origin. So the origin and the snapshot devices are identical, but only until the first write to either of them happens, at which point the Copy-On-Write (COW) operation kicks in. A new chunk is allocated for the modified chunk, the data is written to the new chunk and the pointer in the modified device is changed to point at the new chunk. If the change is smaller than the chunk, the original data is copied over to the new chunk together with the modified data. If the change spans more than a single chunk, multiple new chunks are allocated and written to, with multiple pointers being changed, of course.
Such snapshots are faster and have a smaller overhead than the "old" snapshots provided by the snapshot DM target. A non-thin DM snapshot has a segment of space reserved for the COW data (like a "personal" pool). When writes to the origin happen, the original data is copied over to the COW space. When writes to the snapshot happen, the data also goes to the COW space. The snapshot then has a lookup table allowing it to provide the right piece of data on read operations. However, this is slower and less efficient than the thin provisioning implementation, and the snapshot can easily run out of its reserved COW space, whereas with thin snapshots the chunks for the COW data are allocated on demand from the pool shared with the origin. And thanks to the single, flat metadata space (of pointers), it's not a problem to create snapshots of snapshots of snapshots,... with thin provisioning. Non-thin snapshots can only go one level deep - IOW, a snapshot cannot serve as an origin for another snapshot.
Last but not least, since a thin snapshot is a thin LV like any other, it doesn't need the origin to exist in order to be usable. Its pointers still point at the same chunks, which are not freed because there's still something pointing at them (IOW, their reference count is > 0).
It has been mentioned many times in the section above that the chunks are allocated from a pool - a thin pool, more precisely. It should be no surprise that a thin pool is a DM device. Could it be just a simple contiguous space split into chunks of a given size that are allocated by the thin LVs (DM devices)? The answer is similar to the answer to the question of why LVM doesn't use 100 % of the PVs' space. Just like a VG needs to keep track of the mappings (tables) its LVs consist of, the thin pool needs to know which chunks are being used by which thin LV. And this information needs to be persistent so that the thin LVs can be (re)constructed on (re)boot. Thus there needs to be a metadata part of the thin pool (containing the pointers) as well as a data part (containing the actual data chunks). I don't want to go into the details of the DM thin provisioning API/CLI because this post focuses on LVM thin provisioning, which makes things a lot easier, but this example of how a thin pool is created is quite useful:
dmsetup create pool \
    --table "0 20971520 thin-pool $metadata_dev $data_dev \
              $data_block_size $low_water_mark"
As we can see, separate devices have to be provided for the metadata and data parts (for now let's ignore the $data_block_size and $low_water_mark parameters; we will get to them later). This may seem overly complicated and even hard to achieve - does one need two disks, one for data and another for metadata? Don't forget that those two devices can be any block devices, in particular other DM devices. And since the pool can be over-provisioned, as described in the first use case above, one may need to extend the data part as more and more chunks are actually allocated from the pool. Similarly, when many snapshots are created, there may not be enough space for the metadata (pointers) and the metadata part of the pool needs to be enlarged. Thus it's extremely useful to use DM devices as the data/metadata devices due to their flexibility (remember that one can reload a device's table while the device is being used). In advanced use cases, it also makes sense to give the data and metadata devices different properties - metadata needs much less space than data, but it needs to be read/written often and fast while being more valuable (a few wrong sectors of metadata may cause big data losses).
And what about the other two parameters? $data_block_size is the size of the allocation chunks. This is a very important value which can significantly affect the performance of the pool. The bigger the chunks are, the fewer pointers and operations on them are needed when new space is allocated, etc. But bigger chunks also mean fewer chunks stay shared between snapshots and their origins, because even a small change in a big chunk requires a whole new chunk to be allocated for the COW operation. So generally, if many snapshots are expected to be created, a small chunk size should be used; otherwise, a bigger chunk size means less overhead. The allowed range for the $data_block_size value is 64 KiB - 1 GiB.
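To make the numbers concrete: the thin-pool table takes $data_block_size in 512-byte sectors, and the chunk size also bounds the worst-case COW amplification, since a tiny write to a shared chunk copies the whole chunk. A quick sketch, with 512 KiB chosen purely as an example value:

```shell
chunk_kib=512                     # an example chunk size: 512 KiB
# $data_block_size for the dmsetup table is expressed in 512-byte sectors
echo $((chunk_kib * 1024 / 512))  # 1024
# worst case: a 4 KiB write into a chunk shared with a snapshot forces
# a copy of the whole 512 KiB chunk - a 128x write amplification
echo $((chunk_kib / 4))           # 128
```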
As was mentioned above, in some cases the thin pool may run out of data or metadata space. If the data space is exhausted then, based on the configuration, I/O operations are either queued or failed. If the metadata space is exhausted, the pool will error all I/O until it is taken offline and a repair is performed to fix potential inconsistencies. Moreover, due to the metadata transaction being aborted and the pool doing caching, there might be I/O operations not yet committed to disk that were already acknowledged to the upper storage layers (the file system), so those layers need to perform their checks/repairs too. To prevent such situations, DM triggers an event which may be monitored by a (userspace) daemon that should make sure the data or metadata part gets extended. This happens when the amount of free space drops below a certain level - $low_water_mark for the data part and a kernel built-in value for the metadata part.
As described above, running out of metadata space is a much bigger issue than running out of data space. Of course, it's not possible to run out of data space unless the thin pool is over-provisioned (incl. snapshots). But what is the right amount of metadata space? Just like with the chunk size, there's no single great answer, as the right value depends on the use case. An important rule is that the administrator should always make sure that the thin pool can grow when needed and that all mechanisms for that to happen are properly configured and set up. The allowed range of metadata space sizes is 2 MiB - 16 GiB, and the LVM documentation (lvmthin(7)) suggests 1 GiB as the default value, which should be okay for most use cases. However, the DM documentation suggests 48 * $data_dev_size / $data_block_size (but at least 2 MiB).
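The DM formula is easy to evaluate. For instance, for a 1 TiB data device with 512 KiB chunks (sizes picked purely for illustration):

```shell
data_bytes=$((1024 * 1024 * 1024 * 1024))   # 1 TiB data device
chunk_bytes=$((512 * 1024))                 # 512 KiB chunk size
# suggested metadata device size: 48 * $data_dev_size / $data_block_size
meta_bytes=$((48 * data_bytes / chunk_bytes))
echo "$((meta_bytes / 1024 / 1024)) MiB"    # 96 MiB
```

which lands comfortably inside the allowed 2 MiB - 16 GiB range and below the 1 GiB suggested by lvmthin(7).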
TRIM/discard

Wondering what a section titled with an SSD-related technology may be doing in a post about thin provisioning? TRIM/discard is an operation by which the OS informs the underlying storage about which sectors are no longer in use. In the normal mode of operation, file systems just mark sectors of deleted files as unused for themselves and later reuse them. However, writes to an SSD are much faster and cheaper if the target cells are clean than if they have to be overwritten. Thus it's useful to tell the SSD which sectors are actually erased so that it can clean them in advance. This also improves wear levelling.
Okay, so what does all this have to do with thin provisioning? Well, just like the SSD has by default no idea which sectors the file system doesn't actually use (erased sectors), the same applies to the thin pool. The file system's reads/writes use the allocated chunks or allocate new ones, but never put chunks back into the pool. By doing a TRIM/discard (e.g. with the fstrim utility), the file system informs the thin pool which sectors are not being used (erased), and the corresponding chunks can be put back into the pool. This way, with the first use case described at the beginning of this post (a shared storage pool), it's not a problem if many users from time to time fill all their 100 GiB of space with data they later remove. Well, as long as they don't all do it at the same time, of course.
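Discards don't have to be issued by hand; running fstrim periodically is the usual approach (many distributions ship a systemd fstrim.timer from util-linux for exactly this). As a sketch, an equivalent crontab fragment could look like this - the schedule and the -a flag (trim all mounted file systems that support discard) are my choice:

```
# weekly, Sunday 03:00: return unused blocks to the pool / SSD
0 3 * * 0  /usr/sbin/fstrim -a
```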
Hopefully the blog post has managed to describe the basics of thin provisioning up to this point. It should now be clear what a thin pool is, how thin LVs (devices) are created in/on top of/from it, and how thin LVs and snapshots differ from non-thin LVs and snapshots, respectively.
It should also be clear (from the previous blog post) that while DM is enough to make this all work, without LVM there's no metadata about the devices and their mappings (tables) so one has to somehow (manually) manage this information and take care that devices are being setup properly. That's cumbersome and a waste of time and effort because LVM can take care of all this and in a much more reliable way.
So what does working with LVM thin provisioning look like? Let's go through a few examples. First, we of course need a VG (all the examples again use a VM with two extra 1 GiB disks for testing):
# vgcreate test /dev/sda /dev/sdb
Now I could use a single command to create a thin pool with a thin LV like this:
# lvcreate -n thlv1 -L1500M -V1G --thinpool test/pool
where -L gives the size of the pool and -V gives the size of the thin LV. But let's go through the steps that happen behind the scenes of the above command so we can take a more detailed look at how LVM Thin Provisioning works.
So first we need a thin pool so that we can create thin LVs in/on top of it. As was mentioned above, the thin pool consists of two separate parts - the data part and the metadata part. While it is possible to leave the creation of the metadata part to LVM (see lvmthin(7)), it's nice to have greater control over things, and we can do some nice tricks like this:
# lvcreate -n pool_meta --type raid1 -m1 -L10M test
# lvcreate -n pool -L1500M test
which creates two LVs - a 10 MiB RAID1 LV and a 1500 MiB linear LV - both using the disks /dev/sda and /dev/sdb. As explained above, metadata is to some extent more valuable, so it makes sense to keep it in two copies (hence RAID1). On the other hand, we want as much space as possible for the data, so it's nice to have a simple way to span the pool LV over both disks. There's some more space left in the VG (although it's not exactly 2 GiB - 1500 MiB - 2*10 MiB, right?), and that space is needed for two reasons. One of them was already mentioned - it's good practice to leave some space in the VG so that the thin pool's metadata/data parts can grow when needed. The other is that when the thin pool is created, LVM creates an extra pmspare (Pool Metadata spare) LV of the same size as the metadata LV for repair/recovery/... operations on the metadata.
Note: or we could of course use the older mirror target with the mirrorlog.
Here's how the thin pool is created by converting the pool LV, using the pool_meta LV and a chunk size of 512 KiB:
# lvconvert --type thin-pool --poolmetadata test/pool_meta -c 512K test/pool
Note: LVM warns me that it's not a good idea to combine a big chunk size with zeroing (which I obviously have turned on by default), as that could lead to degraded performance, but let's not bother with that for now.
We can check with the lvs command that the thin pool was really created:
# lvs test -olv_name,size
  LV   LSize
  pool 1.46g
but what's probably more interesting is the output with the -a option showing all LVs including the internal ones:
# lvs test -a -olv_name,size
  LV                    LSize
  [lvol0_pmspare]       12.00m
  pool                  1.46g
  [pool_tdata]          1.46g
  [pool_tmeta]          12.00m
  [pool_tmeta_rimage_0] 12.00m
  [pool_tmeta_rimage_1] 12.00m
  [pool_tmeta_rmeta_0]  4.00m
  [pool_tmeta_rmeta_1]  4.00m
Now we can see the internal LVs of what used to be the RAID1 pool_meta LV (now renamed to pool_tmeta) as well as the extra lvol0_pmspare LV explained above.
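Notice also that the metadata LV we created with -L10M shows up as 12.00m: LV sizes are rounded up to a whole number of VG extents, which are 4 MiB by default. The rounding is simply (my arithmetic, assuming the default extent size):

```shell
extent_mib=4      # default VG extent size
requested_mib=10  # what we asked for with -L10M
# round the requested size up to a whole number of extents
extents=$(( (requested_mib + extent_mib - 1) / extent_mib ))
echo "$((extents * extent_mib)) MiB"    # 12 MiB
```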
So now we have a thin pool. Cool! But we cannot really use the pool itself for anything useful. We of course need some thin LVs:
# lvcreate -n thlv1 -V1G --thinpool test/pool
# lvcreate -n thlv2 -V1G --thinpool test/pool
Note the -V option being used instead of the -L option. -V stands for virtual size, because as we already know, thin LVs use no space right after they are created - we only limit their maximum size. But of course that maximum size can only be reached as long as there's enough space in the pool, which cannot happen here since we have two 1 GiB thin LVs in a 1.46 GiB pool. In this case LVM kindly warns me that the sum of the thin LV sizes is even bigger than the whole VG, which means that even if the pool is grown automatically, it can never grow enough to provide space for all the thin LVs if they use everything assigned to them. It's an important warning, but nothing more. The LVs are created just fine:
# lvs test -olv_name,size,data_percent,metadata_percent
  LV    LSize Data%  Meta%
  pool  1.46g 0.00   0.36
  thlv1 1.00g 0.00
  thlv2 1.00g 0.00
This time I added two extra columns to the output so we can see that no chunks from the pool are allocated so far, but a small percentage of the metadata is already in use - of course, the pool has to know that there are two thin LVs in it with some given virtual sizes.
I could continue creating more and more thin LVs in the pool and overcommit it to whatever ratio needed. A great thing is that a thin LV is a regular block device like any other. So if you, for example, need to find out how something (e.g. a file system) would behave on a really big device you don't have the space for, thin provisioning is a great tool for this.
Note: one can use sparse files and loop devices for this too, but thin LVs allocate only the really used chunks from the pool no matter where they are, and they avoid many of the issues sparse files and loop devices introduce.
So far we have covered the first use case from the introduction of this post. What about the second one (snapshots)? Here's how a thin snapshot can be created:
# lvcreate -s -n thlv1_1 test/thlv1
This works the same way as for a non-thin snapshot, except that the thin snapshot doesn't need to be given any size to use. I may go ahead and create a few more snapshots in the same way:
# lvcreate -s -n thlv1_1_1 test/thlv1_1
# lvcreate -s -n thlv1_1_2 test/thlv1_1
and then take a look at the result:
# lvs test -olv_name,lv_attr,size,data_percent,metadata_percent,origin
  LV        Attr       LSize Data%  Meta%  Origin
  pool      twi-aotz-- 1.46g 0.00   0.36
  thlv1     Vwi-a-tz-- 1.00g 0.00
  thlv1_1   Vwi---tz-k 1.00g               thlv1
  thlv1_1_1 Vwi---tz-k 1.00g               thlv1_1
  thlv1_1_2 Vwi---tz-k 1.00g               thlv1_1
  thlv2     Vwi-a-tz-- 1.00g 0.00
this time with two more extra columns showing the attributes and origins (if any). As can be seen, the snapshots are not active (- instead of a) and they have the s(k)ip-activation flag set, which means that when the VG is activated or lvchange -ay is called on them, they are not activated. The reason is that a snapshot is initially a block device identical to its origin, and having two identical devices in a system at once can cause issues (e.g. if there's a file system on the origin, the snapshot introduces a second file system with the same UUID). The flag can be ignored/overridden or unset with the following commands (respectively):
# lvchange -ay -K test/thlv1_1
# lvchange -kn test/thlv1_1
# lvs test -olv_name,lv_attr,size,data_percent,metadata_percent,origin
  LV        Attr       LSize Data%  Meta%  Origin
  pool      twi-aotz-- 1.46g 0.00   0.36
  thlv1     Vwi-a-tz-- 1.00g 0.00
  thlv1_1   Vwi-a-tz-- 1.00g 0.00          thlv1
  thlv1_1_1 Vwi---tz-k 1.00g               thlv1_1
  thlv1_1_2 Vwi---tz-k 1.00g               thlv1_1
  thlv2     Vwi-a-tz-- 1.00g 0.00
Wonder what happens if I now try to write to the origin thin LV? Let's try it out:
# dd if=/dev/urandom of=/dev/test/thlv1 bs=2M count=1
# lvs test -olv_name,lv_attr,size,data_percent,metadata_percent,origin
  LV        Attr       LSize Data%  Meta%  Origin
  pool      twi-aotz-- 1.46g 0.13   0.39
  thlv1     Vwi-a-tz-- 1.00g 0.20
  thlv1_1   Vwi-a-tz-- 1.00g 0.00          thlv1
  thlv1_1_1 Vwi---tz-k 1.00g               thlv1_1
  thlv1_1_2 Vwi---tz-k 1.00g               thlv1_1
  thlv2     Vwi-a-tz-- 1.00g 0.00
Of course there are now some chunks allocated for the thlv1 LV from the pool. And also the use of metadata grew because of the snapshot. Let's now verify that the snapshot has the original (zeroed) data and the origin has some random data in it:
# dd if=/dev/test/thlv1_1 bs=10 count=1|od -x
0000000 0000 0000 0000 0000 0000
# dd if=/dev/test/thlv1 bs=10 count=1|od -x
0000000 6f7d 5f3c 49a5 3a6f 64c9
And since I have the snapshot activated, I can write some other random data into it too and check the new results:
# dd if=/dev/test/thlv1_1 bs=10 count=1|od -x
0000000 2e92 82bf 496f 8dfe 8286
# dd if=/dev/test/thlv1 bs=10 count=1|od -x
0000000 6f7d 5f3c 49a5 3a6f 64c9
If I now activate the snapshot of the thlv1_1 snapshot, I can see it still has the original data:
# dd if=/dev/test/thlv1_1_1 bs=10 count=1|od -x
0000000 0000 0000 0000 0000 0000
So it all works as expected. But it's not for free, right? The data has to be stored somewhere, and so do the pointers. Thus there's a (small) change in the thin pool's usage:
# lvs test -olv_name,lv_attr,size,data_percent,metadata_percent,origin
  LV        Attr       LSize Data%  Meta%  Origin
  pool      twi-aotz-- 1.46g 0.27   0.42
  thlv1     Vwi-a-tz-- 1.00g 0.20
  thlv1_1   Vwi-a-tz-- 1.00g 0.20          thlv1
  thlv1_1_1 Vwi-a-tz-k 1.00g 0.00          thlv1_1
  thlv1_1_2 Vwi---tz-k 1.00g               thlv1_1
  thlv2     Vwi-a-tz-- 1.00g 0.00
Okay, so now I have a few snapshots, and it should be obvious that one can activate a snapshot and e.g. start using it instead of the origin. However, those two are different devices with different UUIDs, different paths and names, etc. If I had a snapshot of a thin LV mounted as / and wanted to return to the state from when the snapshot was created, I'd have to change a lot of configuration to make the system boot with the snapshot mounted as / instead of the origin LV. But there's a better solution for this case: it's possible to merge the snapshot (IOW, revert the origin to the snapshot):
# lvconvert --merge test/thlv1_1
  Merging of thin snapshot test/thlv1 will occur on next activation of test/thlv1_1.
# lvchange -an test/thlv1
# lvchange -ay test/thlv1
# lvs test -olv_name,lv_attr,size,data_percent,metadata_percent,origin
  LV        Attr       LSize Data%  Meta%  Origin
  pool      twi-aotz-- 1.46g 0.13   0.39
  thlv1     Vwi-a-tz-- 1.00g 0.20
  thlv1_1_1 Vwi-a-tz-k 1.00g 0.00          thlv1
  thlv1_1_2 Vwi---tz-k 1.00g               thlv1
  thlv2     Vwi-a-tz-- 1.00g 0.00
# dd if=/dev/test/thlv1 bs=10 count=1 2>/dev/null|od -x
0000000 2e92 82bf 496f 8dfe 8286
When merging the snapshot back into the origin, LVM tells me it will be merged on the next activation of the snapshot. However, I think this might actually be a bug because, AFAICT, the merge happens when the origin is activated. After doing so, the snapshot is gone, but the origin really contains what had been written to the snapshot (don't forget there was some random data written to it). If I really wanted the original contents back, I'd have to merge the thlv1_1_1 snapshot too, either before or after the merge of the thlv1_1 snapshot. It should be obvious how the merge before would work, but what about the merge afterwards? The output of the lvs command gives the answer - the origin of the thlv1_1_1 snapshot is now the thlv1 LV. Which makes perfect sense, as its original origin (heh) was merged into it.
What happens if I remove the origin now?
# lvremove test/thlv1
Do you really want to remove active logical volume test/thlv1? [y/n]: y
  Logical volume "thlv1" successfully removed
# lvs test -olv_name,lv_attr,size,data_percent,metadata_percent,origin
  LV        Attr       LSize Data%  Meta%  Origin
  pool      twi-aotz-- 1.46g 0.00   0.36
  thlv1_1_1 Vwi-a-tz-k 1.00g 0.00
  thlv1_1_2 Vwi---tz-k 1.00g
  thlv2     Vwi-a-tz-- 1.00g 0.00
# lvchange -ay -K test/thlv1_1_1
# dd if=/dev/test/thlv1_1_1 bs=10 count=1|od -x
0000000 0000 0000 0000 0000 0000
The LV is removed just fine, with no problems. And since it's gone, the snapshots have no origin. Their flags are not modified, so they are still skipped on activation, but other than that they are thin LVs like any other. And of course the chunks and metadata used exclusively by thlv1 were freed.
As the last example, let's take a look at what TRIM/discard does in practice:
# mkfs.xfs /dev/test/thlv2
# lvs test -olv_name,lv_attr,size,data_percent,metadata_percent,origin
  LV        Attr       LSize Data%  Meta%  Origin
  pool      twi-aotz-- 1.46g 0.60   0.36
  thlv1_1_1 Vwi-a-tz-k 1.00g 0.00
  thlv1_1_2 Vwi---tz-k 1.00g
  thlv2     Vwi-a-tz-- 1.00g 0.88
# mkdir /tmp/tst
# mount /dev/test/thlv2 /tmp/tst
# dd if=/dev/urandom of=/tmp/tst/test.bin bs=10M count=1
# lvs test -olv_name,lv_attr,size,data_percent,metadata_percent,origin
  LV        Attr       LSize Data%  Meta%  Origin
  pool      twi-aotz-- 1.46g 1.27   0.36
  thlv1_1_1 Vwi-a-tz-k 1.00g 0.00
  thlv1_1_2 Vwi---tz-k 1.00g
  thlv2     Vwi-aotz-- 1.00g 1.86
# rm /tmp/tst/test.bin
# lvs test -olv_name,lv_attr,size,data_percent,metadata_percent,origin
  LV        Attr       LSize Data%  Meta%  Origin
  pool      twi-aotz-- 1.46g 1.27   0.36
  thlv1_1_1 Vwi-a-tz-k 1.00g 0.00
  thlv1_1_2 Vwi---tz-k 1.00g
  thlv2     Vwi-aotz-- 1.00g 1.86
# fstrim -v /tmp/tst
/tmp/tst: 1018.4 MiB (1067851776 bytes) trimmed
# lvs test -olv_name,lv_attr,size,data_percent,metadata_percent,origin
  LV        Attr       LSize Data%  Meta%  Origin
  pool      twi-aotz-- 1.46g 0.57   0.36
  thlv1_1_1 Vwi-a-tz-k 1.00g 0.00
  thlv1_1_2 Vwi---tz-k 1.00g
  thlv2     Vwi-aotz-- 1.00g 0.83
A newly created file caused some chunks to be allocated from the pool, as expected (and so did the file system creation, which writes inodes,...). However, when the file was removed, the chunks weren't freed and put back into the pool. That's because the file system didn't tell LVM that the sectors are no longer in use. The fstrim utility does exactly that, so after running it, the pool reports smaller usage again. It also looks like the file system wrote something extra during its creation phase that it later removed, because the pool usage ends up even smaller than right after the file system was created.
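The lvs percentages line up with the 10 MiB file nicely: thlv2's Data% went from 0.88 to 1.86 of its 1 GiB virtual size. Working in hundredths of a percent to stay in integer arithmetic (this is my sanity check, not anything LVM reports directly):

```shell
lv_mib=1024    # thlv2 is 1 GiB
before=88      # 0.88 % expressed in hundredths of a percent
after=186      # 1.86 %
# the difference corresponds to the newly allocated chunks, in MiB
echo "$(( (after - before) * lv_mib / 10000 )) MiB"    # 10 MiB
```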
Hopefully the above sections demonstrate that LVM Thin Provisioning is a really nice technology providing some very useful features. And what's more important, it provides all of them in a very robust and reliable way. I have been using LVM ThinP for more than two years now, doing a lot of more or less crazy things with it, and I haven't lost a single byte of my data. I'm even using LVM ThinP on my external disk dedicated to backups, even though the LVM team keeps telling me that's not really a great idea. But you know what? I haven't had any problems with that either. Moreover, it has helped me recover some removed/rewritten data, because I've been creating snapshots before every backup operation. It's as easy as:
lvcreate -n others_$(date +"%Y%m%d") -s backup/others
and the same approach can be used if one has a thin LV mounted as / and doesn't quite trust that none of the updates will break things.
Anyway, if you think the features described above are cool and you have a use case for them, don't hesitate to give LVM ThinP a try! It might be a little harder to understand and work with than some similar alternatives, but it will never let you down! The lvmthin(7) man page is a great guide full of nice tips, and everybody should be able to set up LVM thin provisioning based on it. There are also high-level tools that support LVM ThinP, ranging from the Anaconda installer through the SSM command line tool and blivet-gui, a great GUI storage configuration tool, to Cockpit, a user-friendly system management tool. LVM Thin Provisioning is also officially supported in Red Hat Enterprise Linux 6 and 7.
Note: SSM = System Storage Manager, in the most recent version/upstream.