ZEVO QuickStart Guide

Transcript of ZEVO QuickStart Guide

  • QuickStart Guide

    Community Edition

  • TABLE OF CONTENTS

    Introduction

    Licensing

    System Requirements

    Warnings

    Installing

    Installing ZEVO

    Uninstalling ZEVO

    Using ZFS

    Command Line Interface

    Disk Device Names

    Creating ZFS Storage Pools

    RAID-Z2 Example

    Converting a Single Disk Pool into a Mirror

    Mirrored Pair Example

    Advanced Format Drives

    Backing Up

    Logs and Notifications

    Troubleshooting Tips

    Acknowledgements

  • ZEVO QuickStart

    INTRODUCTION

    Thank you for downloading the ZEVO Community Edition. ZEVO leverages the Mac OS X storage plug-in

    architecture and seamlessly integrates the powerful ZFS file system. This QuickStart document explains how

    to install ZEVO and provides a few examples to help get you going.

    Licensing

    This version of ZEVO has a per-computer license. See the ZEVO End User License Agreement for specific

    details.

    System Requirements

    Mac OS X: Snow Leopard (10.6.8), Lion (10.7.4) or Mountain Lion (10.8.1)

    A Mac with an Intel 64-bit processor running a 64-bit kernel

    4GB of RAM (8GB or more recommended)

    A dedicated internal/external hard disk

    Warnings

    To ensure that your data is protected and to prevent any loss of data, we strongly recommend that you keep at

    least two copies of your data at all times. Keep one copy using your ZEVO storage and a second copy on another

    storage medium, such as another hard disk or a cloud backup service.

    ZEVO uses the standard ZFS on-disk format (v28) and is therefore binary compatible with ZFS on other

    platforms. However, direct interchange with other platforms is not supported in this version.


  • INSTALLING

    Installing ZEVO

    Installing ZEVO is simple, just like installing any other system program or utility. It uses the standard Mac OS

    X Installer for installation. ZEVO is delivered as a disk image. The Install ZEVO icon opens the package

    installer, which will guide you through the installation process. Double-click on this icon, or select it and use the

    File > Open menu (⌘O). Since the installed software bundles live in protected system folders, you will need to provide an administrator password during the installation.

    Note: In some rare cases, the Installer can stall for a few minutes while the system is rebuilding its kernel extension cache. Please give it time if it appears to be making no progress.

    Uninstalling ZEVO

    ZEVO can be easily uninstalled using the Uninstall ZEVO button located in the ZEVO system preferences

    panel. Just select the button to perform the uninstall operation. Authorization is required and a reboot is

    recommended if you were actively using ZEVO prior to the uninstall.

    USING ZFS

    Command Line Interface

    This version of ZFS requires the use of the command line for configuration and administration of ZFS storage

    pools and file systems. The zpool(8) and zfs(8) man pages document the two commands used for configuration of

    ZFS on the Mac. In addition, the diskutil(8) tool can help identify disks.
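    For example, you can open the reference documentation and survey the attached disks directly from the Terminal application:

    $ man zpool
    $ man zfs
    $ diskutil list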

    Disk Device Names

    Every disk device on Mac OS X has a unique runtime name and all these device names reside under the /dev/

    subdirectory. This directory is normally hidden by the Finder but it can easily be viewed in the Terminal. By

    convention, all disk device names start with disk followed by a number, for example disk1. Some disk devices

    are partitioned into multiple slices, and in that case, the disk device name will also contain the slice number, as in

    disk1s2 (representing slice 2 of disk 1). The following listing shows the disk devices on the MacBook Pro used to

    write this document.


  • $ ls -l /dev/disk*
    brw-r----- 1 root operator 14, 0 May 5 06:17 /dev/disk0
    brw-r----- 1 root operator 14, 2 May 5 06:17 /dev/disk0s1
    brw-r----- 1 root operator 14, 4 May 5 06:17 /dev/disk0s2
    brw-r----- 1 root operator 14, 1 May 5 06:17 /dev/disk1
    brw-r----- 1 root operator 14, 3 May 5 06:17 /dev/disk1s1
    brw-r----- 1 root operator 14, 5 May 5 06:17 /dev/disk1s2
    brw-r----- 1 root operator 14, 6 May 5 06:17 /dev/disk1s3

    In addition to the disk1 name, you will also notice an rdisk1. This second version of the name refers to the raw disk

    and is sometimes called the character device. The non-raw name is known as the block device. Disks can be

    accessed at both a block level and a byte level. For file system use, we will almost always be using the block device

    (i.e. the device names without the r prefix). Disk device names can also refer to virtual devices, like disk

    images, or in the case of ZFS, a proxy device that represents the logical storage pool that itself is likely comprised

    of many physical disks.

    One important difference between Mac OS X device names and those on other versions of Unix is that the device names on a

    Mac are only temporary. The names are assigned in a first-come, first-served manner. You can therefore never

    assume that the disk device name you are currently using or observing will be the same after the next restart or

    reattachment of that device. Think of it as a temporary seat assignment on your next flight: you will have a seat,

    but it may not be the same as on the previous flight. ZFS needs to know where to look for the disks in the storage

    pools it manages. Therefore on Mac OS X, ZFS has an alternate naming mechanism that uses persistent device

    names which will not change over time. These names are based on the GPT UUID of the device and will always

    uniquely identify a ZFS virtual device regardless of how it is attached to the system. These persistent alternate

    names are created on demand whenever a ZFS labeled device is discovered by the system.

    You can use the ls -l command to see the persistent names (and the temporary disk name that they refer to):

    $ ls -l /dev/dsk/
    lrwxr-xr-x 1 root wheel 12 Oct 17 01:04 GPTE_6A6490B2-DE02-42E7-8678-9AA -> /dev/disk0s2
    lrwxr-xr-x 1 root wheel 12 Oct 17 01:08 GPTE_B0A6CC2D-315C-4EF3-AB4A-3EA -> /dev/disk3s2

    If you have many disks attached to your system, it might be confusing to determine which name is associated with

    each disk. You can get a more detailed list of your devices using the diskutil(8) tool. The tool has a list option

    that will list all the disks and an info option that will give more detailed information about a specific disk. The list


    command will show the format type and size and, if applicable, the name of the file system on that device. The

    listings below demonstrate a sample list and info output.

    $ diskutil list
    /dev/disk0
       #:                       TYPE NAME                 SIZE       IDENTIFIER
       0:      GUID_partition_scheme                     *200.0 GB   disk0
       1:                        EFI                      209.7 MB   disk0s1
       2:                        ZFS                      199.7 GB   disk0s2
    /dev/disk1
       #:                       TYPE NAME                 SIZE       IDENTIFIER
       0:      GUID_partition_scheme                     *121.3 GB   disk1
       1:                        EFI                      209.7 MB   disk1s1
       2:                  Apple_HFS Koyuk                120.5 GB   disk1s2
       3:                 Apple_Boot Recovery HD          650.0 MB   disk1s3

    $ diskutil info disk1s2
       Device Identifier:        disk1s2
       Device Node:              /dev/disk1s2
       Part of Whole:            disk1
       Device / Media Name:      Koyuk

       Volume Name:              Koyuk
       Escaped with Unicode:     Koyuk

       Mounted:                  Yes
       Mount Point:              /
       Escaped with Unicode:     /

       File System Personality:  Journaled HFS+
       Type (Bundle):            hfs
       Name (User Visible):      Mac OS Extended (Journaled)
       Journal:                  Journal size 16384 KB at offset 0x388000
       Owners:                   Enabled

       Partition Type:           Apple_HFS
       OS Can Be Installed:      Yes
       Media Type:               Generic
       Protocol:                 SATA
       SMART Status:             Verified
       Volume UUID:              EB5211C4-301C-31C7-88B0-C46E58349CBD

       Total Size:               120.5 GB (120473067520 Bytes)
       Volume Free Space:        93.6 GB (93611483136 Bytes)
       Device Block Size:        512 Bytes

       Read-Only Media:          No
       Read-Only Volume:         No
       Ejectable:                No

       Whole:                    No
       Internal:                 Yes
       Solid State:              Yes


  • Creating ZFS Storage Pools

    ZFS is based on a storage pool model. To use ZFS with your disks, you first need to create a ZFS pool. This

    initial step requires identifying which disks to use in the pool. To assist in this identification effort, ZEVO has a

    showdisks command that will show disks that are likely candidates for use with ZFS. This command filters out

    inappropriate disk devices, such as the boot disk, disks hosting user home directories, or disks that are already part of a

    storage pool or RAID set. Here's an example of using the zpool showdisks command:

    $ zpool showdisks

    DISK DEVICE   SIZE      CONNECTION   DESCRIPTION
    /dev/disk1    1.82TiB   SATA         WDC WD20EURS-63S48Y0 Media
    /dev/disk2    1.82TiB   SATA         WDC WD20EURS-63S48Y0 Media

    Once you have identified the disks that will comprise your storage pool, you can use the zpool create command to

    create the pool. Here's an example of using the zpool create command to create a two-disk mirror from the disks

    identified above:

    $ sudo zpool create -f my-new-pool mirror /dev/disk1 /dev/disk2

    While the command syntax is simple, a lot happened behind the scenes. Each of the disks was

    automatically labeled with a standard GPT partition scheme (expected on the Mac and an industry standard). ZFS

    also added its own VDEV labels (which identify the disks as belonging to a ZFS pool). A default file system was

    also created in the pool. We can see the result of our creation using the zpool list command as follows:

    $ zpool list
    NAME          SIZE     ALLOC    FREE     CAP   HEALTH   ALTROOT
    my-new-pool   1.82Ti   8.55Mi   1.82Ti   0%    ONLINE   -

    In addition to a simple summary listing, you can use the zpool status command to obtain additional details about

    your storage pool, such as the disk configuration and the health of the pool. Here are the status results from our

    example pool:

    $ zpool status my-new-pool
      pool: my-new-pool
     state: ONLINE
      scan: none requested
    config:

            NAME                                     STATE    READ WRITE CKSUM
            my-new-pool                              ONLINE      0     0     0
              mirror-0                               ONLINE      0     0     0
                GPTE_13749DFB-EB6B-4960-8020-3D76B3  ONLINE      0     0     0  at disk1s2
                GPTE_4EB64E61-7108-46E3-A983-BFA36F  ONLINE      0     0     0  at disk2s2

    errors: No known data errors

    When creating a storage pool, ZFS will automatically create a default top-level file system in the pool. This

    default file system shares the same name as the pool. On Mac OS X, ZFS file systems will be mounted in the

    /Volumes directory by default. Following the above example, we can see our default file system using the zfs

    list command:

    $ zfs list
    NAME          USED    AVAIL    REFER   MOUNTPOINT
    my-new-pool   940Ki   1.79Ti   592Ki   /Volumes/my-new-pool
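    The default file system is not the only one a pool can hold. Additional file systems can be created inside the pool with the zfs create command, and each will mount under /Volumes by default. A minimal sketch, where the Documents name is only an illustration:

    $ sudo zfs create my-new-pool/Documents
    $ zfs list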

    RAID-Z2 Example

    Here we create a RAID-Z2 (double parity) pool using 6 disks. This pool can tolerate up to 2 disks failing at the

    same time. In this example we set the owner of the ZFS file system to the user me; you may likewise want to set

    a non-root owner for your file system.

    $ sudo zpool showdisks

    DISK DEVICE   SIZE      CONNECTION   DESCRIPTION
    /dev/disk1    1.82TiB   SATA         WDC WD20EURS-63S48Y0 Media
    /dev/disk2    1.82TiB   SATA         WDC WD20EURS-63S48Y0 Media
    /dev/disk3    1.82TiB   SATA         WDC WD20EURS-63S48Y0 Media
    /dev/disk4    1.82TiB   SATA         WDC WD20EURS-63S48Y0 Media
    /dev/disk5    1.82TiB   SATA         WDC WD20EURS-63S48Y0 Media
    /dev/disk6    1.82TiB   SATA         WDC WD20EURS-63S48Y0 Media

    $ sudo zpool create -f raider raidz2 /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4 /dev/disk5 /dev/disk6

    $ sudo chown me:staff /Volumes/raider


  • $ zpool status raider
      pool: raider
     state: ONLINE
      scan: none requested
    config:

            NAME                                     STATE    READ WRITE CKSUM
            raider                                   ONLINE      0     0     0
              raidz2-0                               ONLINE      0     0     0
                GPTE_6AA8E61A-63AE-4C82-8117-1CA41F  ONLINE      0     0     0  at disk1s2
                GPTE_203C7C41-2576-456E-AD1D-FE17AF  ONLINE      0     0     0  at disk2s2
                GPTE_9B68CBE7-5232-456A-907A-7120AB  ONLINE      0     0     0  at disk3s2
                GPTE_16485F0B-985D-4B78-AD1B-693764  ONLINE      0     0     0  at disk4s2
                GPTE_165B209A-A829-462D-A297-60E51F  ONLINE      0     0     0  at disk5s2
                GPTE_93905919-5908-4137-9D31-C6EAA3  ONLINE      0     0     0  at disk6s2

    errors: No known data errors
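    To verify the integrity of the data on a pool like this one, you can start a scrub and then watch its progress with zpool status. A minimal sketch using the pool above:

    $ sudo zpool scrub raider
    $ zpool status raider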

    Converting a Single Disk Pool into a Mirror

    Here we turn a pool with a single disk into a mirror by attaching a new disk. The new disk should be of similar

    size and speed to the original disk in the pool.

    $ zpool status solo
      pool: solo
     state: ONLINE
      scan: none requested
    config:

            NAME                                          STATE    READ WRITE CKSUM
            solo                                          ONLINE      0     0     0
              GPTE_3A7D177B-0C3E-46A2-8049-76D7E95BADFD   ONLINE      0     0     0

    errors: No known data errors

    $ sudo zpool attach -f solo GPTE_3A7D177B-0C3E-46A2-8049-76D7E95BADFD /dev/disk6

    $ zpool status solo
      pool: solo
     state: ONLINE
      scan: resilvered 426Ki in 0h0m with 0 errors on Fri Sep 14 15:02:21 2012
    config:

            NAME                                     STATE    READ WRITE CKSUM
            solo                                     ONLINE      0     0     0
              mirror-0                               ONLINE      0     0     0
                GPTE_3A7D177B-0C3E-46A2-8049-76D7E9  ONLINE      0     0     0  at disk5s2
                GPTE_11003786-D792-4B8C-8432-8BA38D  ONLINE      0     0     0  at disk6s2

    errors: No known data errors
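    The reverse operation is also possible: if you later want to return to a single-disk pool, the zpool detach command removes one side of the mirror. A sketch using the disk attached above; depending on how the pool was assembled, you may need to pass the GPTE_ name shown by zpool status instead:

    $ sudo zpool detach solo /dev/disk6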

    Mirrored Pair Example

    Here we create a pool of two mirrored pairs (aka RAID 10) using 4 disks. This pool can tolerate a disk failure in each

    mirror. By striping across two mirrors we can achieve a performance gain in our disk throughput.

    $ sudo zpool showdisks

    DISK DEVICE   SIZE      CONNECTION   DESCRIPTION
    /dev/disk1    3.75GiB   USB          Verbatim Store n Go Drive Media
    /dev/disk2    3.75GiB   USB          Verbatim Store n Go Drive Media
    /dev/disk3    3.75GiB   USB          Verbatim Store n Go Drive Media
    /dev/disk4    3.75GiB   USB          Verbatim Store n Go Drive Media

    $ sudo zpool create -f Mirror-Pair mirror /dev/disk1 /dev/disk2 mirror /dev/disk3 /dev/disk4

    $ zpool list Mirror-Pair
    NAME          SIZE     ALLOC   FREE     CAP   HEALTH   ALTROOT
    Mirror-Pair   6.81Gi   473Ki   6.81Gi   0%    ONLINE   -

    $ zpool status Mirror-Pair
      pool: Mirror-Pair
     state: ONLINE
      scan: none requested
    config:

            NAME                                     STATE    READ WRITE CKSUM
            Mirror-Pair                              ONLINE      0     0     0
              mirror-0                               ONLINE      0     0     0
                GPTE_41ED410A-FC24-4A0A-BA16-B1987A  ONLINE      0     0     0  at disk1s2
                GPTE_6FD043CB-2F63-4C9F-8E78-004924  ONLINE      0     0     0  at disk2s2
              mirror-1                               ONLINE      0     0     0
                GPTE_59FCD2C1-3183-4340-8AD0-213160  ONLINE      0     0     0  at disk3s2
                GPTE_389FB535-721B-4F9C-B289-AEF275  ONLINE      0     0     0  at disk4s2

    errors: No known data errors
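    A striped mirror pool can also be grown later by adding another mirrored pair with the zpool add command. This is only a sketch; disk5 and disk6 here are hypothetical devices:

    $ sudo zpool add -f Mirror-Pair mirror /dev/disk5 /dev/disk6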

    Advanced Format Drives

    There are two types of Advanced Format drives: 512 emulation (512e) and 4K native (4Kn). The ZEVO

    implementation supports both. However, 512 emulation drives need to be identified by passing the ashift=12

    property when using the zpool create command.

    The 512e (emulated) hard drives always report a 512-byte sector size even though the actual physical sector size is 4 KiB. This is

    problematic for a file system like ZFS, which doesn't have static block sizes. Without intervention, ZFS would

    incorrectly assume that it could atomically and efficiently access data in units of 512 bytes. For additional info

    regarding Advanced Format see: http://en.wikipedia.org/wiki/Advanced_Format

    In addition to the 512e obfuscation, we have discovered that some USB and FireWire bridges also hide the true

    native sector size. The following command sets the ashift value to 12, as expected for a 4096-byte native sector size:

    $ sudo zpool create -o ashift=12 tank mirror /dev/disk2 /dev/disk4
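    If you want to confirm that the setting took effect, and assuming your installation includes the zdb(8) utility from open-source ZFS, you can inspect the cached pool configuration; an ashift of 12 corresponds to 4096-byte sectors:

    $ sudo zdb -C tank | grep ashift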

    Backing Up

    ZEVO does not have a backup feature built in. It does, however, work with many of the popular backup

    programs available, and we have used it successfully with online backups as well. If you're already using a backup

    application, we encourage you to try it with ZEVO. If you encounter any incompatibilities, please let us know.

    NOTE: ZEVO can be used as a Time Machine target (destination) with the usual caveats of using Time

    Machine. For example, Time Machine is not recommended if you have large amounts of data (over a terabyte),

    you want to back up lots of files in a single directory (e.g. your Mail), or you have large files that are changing

    (e.g. VM disk images). Time Machine cannot currently be used to back up ZEVO volumes.
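    Although ZEVO has no built-in backup feature, ZFS itself provides snapshot and replication primitives that can complement a backup strategy. A minimal sketch using the pool from the earlier examples; the snapshot name is arbitrary, and backup-pool is a hypothetical second pool:

    $ sudo zfs snapshot my-new-pool@2012-09-14
    $ zfs list -t snapshot
    $ sudo zfs send my-new-pool@2012-09-14 | sudo zfs recv backup-pool/my-new-pool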


  • Logs and Notifications

    ZEVO will log important events to the kernel log and to the Growl

    Notification framework. As an additional diagnostic step, you can

    examine the kernel log file using the Console application.

    1. Launch the Console application
    2. Select New Log Window
    3. In the new window, select Show Log List
    4. Under the /private/var/log entry, select kernel.log (system.log in 10.8)
    5. You can use the window's Filter to narrow the results using zfs as the search string
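    The same check can be performed directly in the Terminal by filtering the kernel log mentioned above (substitute system.log on 10.8):

    $ grep -i zfs /var/log/kernel.log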

    You can also get UI notifications directly if you have Growl

    installed. With Growl you can receive simple, non-invasive

    notifications regarding your ZEVO storage.

    You can customize the notifications for each of the 4 ZEVO notification categories using Growl's

    preferences. The current notifications are:

    Physical Disk Errors and Failures

    Storage Pool Events (Create, Destroy, Import and Export)

    Storage Pool Health

    Storage Pool Scans (Scrubs and Resilvering)


  • TROUBLESHOOTING TIPS

    Problem: My ZEVO disk isn't mounting

    Recommendation: Verify that the disk is available to the system using the System Profiler application. The drive should be listed under the corresponding Hardware section (i.e. FireWire, Serial-ATA, USB, Thunderbolt, etc.).

    As a diagnostic step, you can attempt a manual mount using the following command in the Terminal application (substitute your real pool name):

    sudo zpool import -f -F your-pool-name
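    If you are unsure of the exact pool name, running zpool import with no arguments lists any pools that are available for import without actually importing them:

    sudo zpool import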

    Problem: Encountering a "Disk was incorrectly disconnected" alert

    Recommendation: Check your disk cable and power. Some bus-powered devices require more power than the bus can provide.

    Problem: How do I know if ZEVO was installed?

    Recommendation: Check if the ZEVO pane is present in System Preferences. You can also verify that the ZFSFilesystem.kext file is present in the /System/Library/Extensions/ folder.

    Problem: I cannot share my ZEVO volumes using AFP on Lion

    Recommendation: We recommend using SMB for personal file sharing on Lion and Mountain Lion as a workaround until a fix is available.

    Problem: My ZEVO checkup is taking a long time or appears stuck

    Recommendation: In some cases, the checkup (zpool scrub) operation will slow down substantially if ZFS thinks other important operations are in progress. Sometimes it can linger in this state. You can either cancel the scan or, for an external disk, unmount it and then reattach it. On remount it will resume the checkup scan.

    Problem: My custom folder icons are missing in the Finder

    Recommendation: Under Lion and Mountain Lion, the Finder will sometimes not display a custom folder icon in the window view but show a generic folder icon instead. The custom icon does show correctly in the Quick Look view. We are investigating this and hope to have a solution.

    Problem: Installation Failed: The Installer encountered an error ...

    Recommendation: In the Finder, use the Go to Folder... menu to open the /var/log folder. Double-click on the install.log file. Sometimes there is a log entry at the end of the file that explains the cause of the failure.


  • ACKNOWLEDGEMENTS

    ZEVO is made possible by the open-source version of ZFS. We would like to thank the many ZFS developers

    who have helped develop the ZFS technology.

    Mac and Mac OS X are trademarks of Apple Inc.
    ZFS is a registered trademark of Oracle and/or its affiliates.

    Other trademarks are the property of their respective owners.

    Copyright 2012 GreenBytes, Inc. All rights reserved.

    No part of this document may be reproduced without the prior written consent of GreenBytes.
