/*****************************************************************************/
/* Document     : How to get unix files back, after a crash or accidentally  */
/*                deleting them, and some other filesystem errors            */
/* Doc. Version : 5                                                          */
/* File         : undelete_unix.txt                                          */
/* Purpose      : some examples for the Oracle, DB2, SQLServer DBA           */
/* Date         : 24-12-2008                                                 */
/* Compiled by  : Albert van der Sel                                         */
/* Comment      : This describes a few situations where you do not have      */
/*                regular backups available. This note does not pretend      */
/*                to be anything more than just a handful of pointers.       */
/*****************************************************************************/

IMPORTANT NOTICE:

>>> This document contains some selected threads from the Internet.           <<<
>>> It just contains some "pointers" in case you have a file or fs problem.   <<<
>>> Do NOT regard the information as being "directly usable" in any sense!    <<<
>>> It is only meant as a possible pointer, or hint,                          <<<
>>> on which you may investigate further.                                     <<<
>>> Also, it's vital to understand that on the subject of "undelete",         <<<
>>> this document ONLY contains some pointers on that subject.                <<<
>>> It does not pretend to be any more than that.                             <<<

Contents:
---------

1. Some Filesystem errors
2. How to delete "weird" files
3. Some possible hints on how to "undelete" files (if no backups are available)
4. Some other stuff

For some pointers on the subject of "undelete", you might want to jump to section 3 right away.

###############################################################
1. Some Filesystem errors:
###############################################################

----------------------------------------------------------------------------------------
Note 1.1         : A possible way to save files from a corrupt directory
Works on OS      : all unix
probable message : ksh: Invalid file system control data detected:
----------------------------------------------------------------------------------------

>>>> Question:

Anybody recognize this? This directory seems to be missing the ".", I can't umount,
can't remove the directory, can't copy a good directory over it, etc.

spiderman# cd probes
spiderman# pwd
/opt/diagnostics/probes
spiderman# ls -la
ls: 0653-341 The file . does not exist.
spiderman# cd ..
spiderman# ls -la probes
ls: probes: Invalid file system control data detected.
total 0
spiderman#
spiderman# fuser /opt
/opt:
spiderman# umount /opt
umount: 0506-349 Cannot unmount /dev/hd10opt: The requested resource is busy.
spiderman# umount /dev/hd10opt
umount: 0506-349 Cannot unmount /dev/hd10opt: The requested resource is busy.
spiderman# fsck /opt
** Checking /dev/hd10opt (/opt)
MOUNTED FILE SYSTEM; WRITING SUPPRESSED;
Checking a mounted filesystem does not produce dependable results.
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
DIRECTORY CORRUPTED (NOT FIXED)
DIRECTORY CORRUPTED (NOT FIXED)
Directory /diagnostics/probes, '.' entry is missing. (NOT FIXED)
Directory /diagnostics/probes, '..' entry is missing. (NOT FIXED)
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
link count directory I=4098 owner=bin mode$0755 sizeQ2 mtime=May 13 14:54 2005 count 3 should be 2 (NOT ADJUSTED)
link count directory I=4099 owner=bin mode$0755 size24 mtime=Jan 10 13:45 2005 count 2 should be 1 (NOT ADJUSTED)
Unreferenced file I=4106 owner=bin mode0555 sizee56 mtime=Jul 07 14:25 2004 (NOT RECONNECTED)
Unreferenced file I=4106 (NOT CLEARED)
Unreferenced file I=4107 owner=bin mode0555 size)12 mtime=Jul 07 14:25 2004 (NOT RECONNECTED)
etc....
>>>> Answer:

Some good news here. Yes, your directory is hosed, but the important thing is that a
directory is just a repository for storing inode numbers and their associated (human
readable) file names. Since fsck is so nicely generating all of those now currently
inaccessible inode numbers, a find command can be used to move the files into a new
directory. Once the old directory is empty, you can (hopefully) rm -r it.

Here's what you need to do.

a) Get all the inode numbers generated by your fsck.
b) Put them into a variable (e.g. lost_inodes="4099 4106 ....etc.").
c) Make a target directory for the lost inodes to be moved into: mkdir /tmp/recovery
d) cd into your problem File System: cd /opt
e) Run a loop using find:

   for i in ${lost_inodes}
   do
     find . -inum ${i} -exec mv {} /tmp/recovery \;
     echo "Moved and recovered inode # ${i}"
   done

That should do it. Let me know if it works ok!
BTW, the recovered file names may be garbled or meaningless, so check the files and
rename them as needed.
Note that this method saves the files from the corrupt directory.

----------------------------------------------------------------------------------------
Note 1.2         : A superblock issue
Works on OS      : all unix
probable message : probably fsck gives you a message
disks            : local disks, most likely not SAN
----------------------------------------------------------------------------------------

>>>> Method 1:

Use this command in case the superblock is corrupted. It restores the BACKUP COPY of
the superblock over the CURRENT (primary) copy: with a 4 KB block size, skip=31 reads
the backup copy at offset 0x1f000 and seek=1 writes it over the primary copy at offset
0x1000 (see also note 1.3).

# dd count=1 bs=4k skip=31 seek=1 if=/dev/hd4 of=/dev/hd4     (hd4 is an example)
# fsck /dev/hd4 2>&1 | tee /tmp/fsck.errors

OR

>>>>> Method 2:

If you have a dirty superblock you might first try "fsck". If this does not work, try
the following (this procedure does not promise 100% success).
(The following example relates to a bad filesystem on the logical volume slv4.0.)

1. Copy the original Superblock into a file (called sb0 in /tmp - places can be changed):
   dd if=/dev/rslv4.0 of=/tmp/sb0 bs=4k count=1 skip=1
   Note: if=Input File, of=Output File, bs=Block Size.
2. Copy the backup Superblock into a file (called sb1 in /tmp - places can be changed):
   dd if=/dev/rslv4.0 of=/tmp/sb1 bs=4k count=1 skip=31
3. Copy the Backup Superblock file over the original Superblock:
   dd if=/tmp/sb1 of=/dev/rslv4.0 bs=4k seek=1
4. Do "fsck" again on this filesystem.

Note: If you want to restore the original Superblock, do:
   dd if=/tmp/sb0 of=/dev/rslv4.0 bs=4k seek=1

----------------------------------------------------------------------------------------
Note 1.3         : A superblock issue
Works on OS      : AIX
probable message : probably fsck gives you a message
disks            : local disks, most likely not SAN
----------------------------------------------------------------------------------------

>>>> Method 1:

-- Fixing a corrupted magic number in the file system superblock.

If the superblock of a file system is damaged, the file system cannot be accessed.
You can fix a corrupted magic number in the file system superblock.
Most damage to the superblock cannot be repaired. The following procedure describes how
to repair a superblock in a JFS file system when the problem is caused by a corrupted
magic number. If the primary superblock is corrupted in a JFS2 file system, use the
fsck command to automatically copy the secondary superblock and repair the primary
superblock.

In the following scenario, assume /home/myfs is a JFS file system on the physical
volume /dev/lv02. The information in this how-to was tested using AIX® 5.2.
If you are using a different version or level of AIX, the results you obtain might vary significantly. 1. Unmount the /home/myfs file system, which you suspect might be damaged, using the following command: # umount /home/myfs 2. To confirm damage to the file system, run the fsck command against the file system. For example: # fsck -p /dev/lv02 If the problem is damage to the superblock, the fsck command returns one of the following messages: fsck: Not an AIXV5 file system OR Not a recognized filesystem type 3. With root authority, use the od command to display the superblock for the file system, as shown in the following example: # od -x -N 64 /dev/lv02 +0x1000 Where the -x flag displays output in hexadecimal format and the -N flag instructs the system to format no more than 64 input bytes from the offset parameter (+), which specifies the point in the file where the file output begins. The following is an example output: 0001000 1234 0234 0000 0000 0000 4000 0000 000a 0001010 0001 8000 1000 0000 2f6c 7633 0000 6c76 0001020 3300 0000 000a 0003 0100 0000 2f28 0383 0001030 0000 0001 0000 0200 0000 2000 0000 0000 0001040 In the preceding output, note the corrupted magic value at 0x1000 (1234 0234). If all defaults were taken when the file system was created, the magic number should be 0x43218765. If any defaults were overridden, the magic number should be 0x65872143. 4. Use the od command to check the secondary superblock for a correct magic number. An example command and its output follows: # od -x -N 64 /dev/lv02 +0x1f000 001f000 6587 2143 0000 0000 0000 4000 0000 000a 001f010 0001 8000 1000 0000 2f6c 7633 0000 6c76 001f020 3300 0000 000a 0003 0100 0000 2f28 0383 001f030 0000 0001 0000 0200 0000 2000 0000 0000 001f040 Note the correct magic value at 0x1f000. 5. Copy the secondary superblock to the primary superblock. An example command and output follows: # dd count=1 bs=4k skip=31 seek=1 if=/dev/lv02 of=/dev/lv02 dd: 1+0 records in. dd: 1+0 records out. Use the fsck command to clean up inconsistent files caused by using the secondary superblock. For example: # fsck /dev/lv02 2>&1 | tee /tmp/fsck.errs For more information The fsck and od command descriptions in AIX 5L Version 5.3 Commands Reference, Volume 4 AIX Logical Volume Manager from A to Z: Introduction and Concepts, an IBM Redbook AIX Logical Volume Manager from A to Z: Troubleshooting and Commands, an IBM Redbook "Boot Problems" in Problem Solving and Troubleshooting in AIX 5L, an IBM Redbook OR >>>>> Method 2: If you experience a dirty superblock, which causes a filesystem to be not mountable, you can use backup copy of superblock to copy it over the corrupted one. With little unix experience it can be a tough task, because the steps required are as follows: - boot from bootable media (install cd/tape, mksysb tape) - access rootvg before mounting fs - fsck -y on corrupted fs's - logform on logdevice - dd count=1 bs=4k skip=31 seek=1 if=/dev/ of=/dev/ ---------------------------------------------------------------------------------------- Note 1.3 : A superblock issue Works on OS : Solaris probable message : probably fsck gives you a message disks : local disks, most likely not SAN ---------------------------------------------------------------------------------------- >>>> Method 1: Boot from OK prompt to single user mode, for example from CDROM OK boot cdrom -sw Attempt to fsck(1M) boot disk. This could fail with a super block error. # fsck /dev/rdsk/device Find the locations of alternate super blocks. 
BE SURE TO USE AN UPPERCASE -N. For example: # newfs -N /dev/rdsk/c0t0d0s0 /dev/rdsk/c0t0d0s0: 2048960 sectors in 1348 cylinders of 19 tracks, 80 sectors 1000.5MB in 85 cyl groups (16 c/g, 11.88MB/g, 5696 i/g) super-block backups (for fsck -F ufs -o b=#) at: 32, 24432, 48832, 73232, 97632, 122032, 146432, 170832, 195232, 219632, 244032, 268432, 292832, 317232, 341632, 366032, 390432, 414832, 439232, 463632, 488032, 512432, 536832, 561232, 585632, 610032, 634432, 658832, 683232, 707632, 732032, 756432, 778272, 802672, 827072, 851472, 875872, 900272, 924672, 949072, 973472, 997872, 1022272, 1290672, ... Using an alternate super block, run fsck(1M) on the disk. You might have to try more than one alternate super block to make this to work. Pick a couple from the beginning, the middle, and the end. # fsck -o b= /dev/rdsk/c0t0d0s0 The boot block is probably bad too. Restore it while you are booted from the CD-ROM. # /usr/sbin/installboot /usr/platform/architecture/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0 Reboot the operating environment. # reboot OR: >>>>> Method 2: #newfs -N /dev/rdsk/ (like c0t0d0s7) it will generate the identical superblock. then run....... #fsck -o b=535952 /dev/rdsk/ (like c0t0d0s7) OR: >>>>>>> Method 3: Restore a Bad Superblock (Solaris 8,9 and 10) February 25, 2008 by sun4u Become superuser or assume an equivalent role. Determine whether the bad superblock is in the root (/), /usr, or /var file system and select one of the following: If the bad superblock is in either the root (/), /usr, or /var file system, then boot from the network or a locally connected CD. From a locally-connected CD, use the following command: ok boot cdrom -s From the network where a boot or install server is already setup, use the following command: ok boot net -s If the bad superblock is not in either the root (/), /usr, /var file system, change to a directory outside the damaged file system and unmount the file system. # umount /mount-point Caution – Be sure to use the newfs -N in the next step. If you omit the -N option, you will destroy all of the data in the file system and replace it with an empty file system. Display the superblock values by using the newfs -N command. # newfs -N /dev/rdsk/device-name Provide an alternate superblock by using the fsck command. # fsck-F ufs -o b=block-number /dev/rdsk/device-name The fsck command uses the alternate superblock you specify to restore the primary superblock. You can always try 32 as an alternate block. Or, use any of the alternate blocks shown by the newfs -N command. Restoring a Bad Superblock (Solaris 8, 9, and 10 Releases) The following example shows how to restore the superblock copy 5264. # newfs -N /dev/rdsk/c0t3d0s7 /dev/rdsk/c0t3d0s7: 163944 sectors in 506 cylinders of 9 tracks, 36 sectors 83.9MB in 32 cyl groups (16 c/g, 2.65MB/g, 1216 i/g) super-block backups (for fsck -b #) at: 32, 5264, 10496, 15728, 20960, 26192, 31424, 36656, 41888, 47120, 52352, 57584, 62816, 68048, 73280, 78512, 82976, 88208, 93440, 98672, 103904, 109136, 114368, 119600, 124832, 130064, 135296, 140528, 145760, 150992, 156224, 161456, # fsck-F ufs -o b=5264 /dev/rdsk/c0t3d0s7 Alternate superblock location: 5264. 
** /dev/rdsk/c0t3d0s7 ** Last Mounted on ** Phase 1- Check Blocks and Sizes ** Phase 2 - Check Pathnames ** Phase 3 - Check Connectivity ** Phase 4 - Check Reference Counts ** Phase 5 - Check Cyl groups 36 files, 867 used, 75712 free (16 frags, 9462 blocks, 0.0% fragmentation) ***** FILE SYSTEM WAS MODIFIED ***** # ---------------------------------------------------------------------------------------- Note 1.4 : A superblock issue Works on OS : Linux ext2 filesystem probable message : probably fsck gives you a message disks : local disks, most likely not SAN ---------------------------------------------------------------------------------------- DAMAGED SUPERBLOCK If a filesystem check fails and returns the error message “Damaged Superblock” you're lost . . . . . . . or not ? Well, not really, the damaged ¨superblock¨ can be restored from a backup. There are several backups stored on the harddisk. But let me first have a go at explaining what a “superblock”is. A superblock is located at position 0 of every partition, contains vital information about the filesystem and is needed at a filesystem check. The information stored in the superblock are about what sort of fiesystem is used, the I-Node counts, block counts, free blocks and I-Nodes, the numer of times the filesystem was mounted, date of the last filesystem check and the first I-Node where / is located. Thus, a damaged superblock means that the filesystem check will fail. Our luck is that there are backups of the superblock located on several positions and we can restore them with a simple command. The usual ( and only ) positions are: 8193, 32768, 98304, 163840, 229376 and 294912. ( 8193 in many cases only on older systems, 32768 is the most current position for the first backup ) You can check this out and have a lot more info about a particular partition you have on your HD by: # dumpe2fs /dev/hda5 You will see that the primary superblock is located at position 0, and the first backup on position 32768. O.K. let´s get serious now, suppose you get a ¨Damaged Superblock¨ error message at filesystem check ( after a power failure ) and you get a root-prompt in a recovery console, then you give the command: # e2fsck -b 32768 /dev/hda5 don´t try this on a mounted filesystem It will then check the filesystem with the information stored in that backup superblock and if the check was successful it will restore the backup to position 0. Now imagine the backup at position 32768 was damaged too . . . then you just try again with the backup stored at position 98304, and 163840, and 229376 etc. etc. until you find an undamaged backup ( there are five backups so if at least one of those five is okay it´s bingo ! ) So next time don´t panic . . just get the paper where you printed out this Tip and give the magic command # e2fsck -b 32768 /dev/hda5 ---------------------------------------------------------------------------------------- Note 1.5 : Root filesystem full or nearly full Works on OS : most unixes ---------------------------------------------------------------------------------------- Always take care that the "/" root filesystem does not get near 100% full. Potential problems 1. Some systems will not boot anymore in the normal multi-user way 2. On many systems new logons are not possible anymore 3. Some apps write or create unamed pipes "somewhere" in the root fs: they may stall or even crash Remarks on 2: This is caused by a full file system and the system has no space to write its utmpx (login info) entry. 
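If you can still get a root shell, it can help to confirm the situation and spot the biggest
offenders before resorting to the boot procedures described below. The following is only a
rough sketch; the exact df/du options differ a little per unix (on Solaris use du -d instead
of -x, for example), and the paths shown are just examples.

   # How full is the root filesystem really?
   df -k /

   # Largest directories on the root filesystem only (-x: do not cross mountpoints)
   du -xk / | sort -n | tail -20

   # Large files on the root filesystem (here: anything over ~5 MB = 10000 512-byte blocks)
   find / -xdev -size +10000 -exec ls -l {} \;

   # A busy logfile is better truncated than removed (removing an open file frees no space):
   # cp /dev/null /var/adm/example.log      (example path)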
To get around this condition the system must be booted up into single user mode, or you may need to boot from CDROM or from network etc.. Then you might be able to clear logfiles under /var/.. Or just increase the / filesystem with some additional space. ############################################################### 2. How to delete "weird" files ############################################################### ---------------------------------------------------------------------------------------- Note 2.1 : You cannot rm a file in the "normal" way, or How to Delete or Remove Files With Inode Number Works on OS : all unix ---------------------------------------------------------------------------------------- >>>>>> Question: How can I remove a bizarre, irremovable file from a directory? I've tried every way of using /bin/rm and nothing works." >>>>>> Answer: In some rare cases a strangely-named file will show itself in your directory and appear to be un-removable with the rm command. Here is will the use of ls -li and find with its -inum [inode] primary does the job. Let's say that ls -l shows your irremovable as -rw------- 1 smith smith 0 Feb 1 09:22 ?*?*P Type: ls -li to get the index node, or inode. 153805 -rw------- 1 smith smith 0 Feb 1 09:22 ?*?^P The inode for this file is 153805. Use find -inum [inode] to make sure that the file is correctly identified. % find -inum 153805 -print ./?*?*P Here, we see that it is. Then used the -exec functionality to do the remove. . % find . -inum 153805 -print -exec /bin/rm {} \; Note that if this strangely named file were not of zero-length, it might contain accidentally misplaced and wanted data. Then you might want to determine what kind of data the file contains and move the file to some temporary directory for further investigation, for example: % find . -inum 153805 -print -exec /bin/mv {} unknown.file \; Will rename the file to unknown.file, so you can easily inspect it. Another way to remove strangely-named files is to use "ls -q" or "cat -v" to show the special characters, and then use shell's globbing mechanism to delete the file. $ ls -????*'? $ ls | cat -v -^B^C?^?*' $ rm ./-'^B'* -- achieved by typing control-V control-B $ ls the argument given to rm is a judicious selection of glob wildcards (*'s) and sufficient control characters to uniquely identify the file. The leading "./" is useful when the file begins with a hyphen. These binary name files are caused by: * accidental cut-and-pastes to shell prompts - especially when you paste something of the form: "junk > garbage" because the shell creates the file "garbage" before trying to execute the command "junk" * filesystem corruption (in which case touching the filesystem any more can really stuff things up) If you discover that you have two files of the same name, one of the files probably has a bizarre (and unprintable) character in its name. Most probably, this unprintable character is a backspace. For example: $ ls filename filename $ ls -q filename fl?ilename $ ls | cat -v filename fl^Hilename ---------------------------------------------------------------------------------------- Note 2.2 : You cannot rm a file in the "normal" way, or How to Delete or Remove Files With Inode Number Works on OS : all unix Same problem as noted in note 2.1. ---------------------------------------------------------------------------------------- An inode identifies the file and its attributes such as file size, owner, and so on. A unique inode number within the file system identifies each inode. 
But why delete a file by its inode number? Normally you would just use rm. Sometimes,
however, you accidentally create a filename containing control characters, characters
that cannot be typed on a keyboard, or special characters such as ?, * or ^.
Removing such filenames with rm can be a problem. Use the following method to delete a
file with strange characters in its name. The procedure outlined below works on
Solaris, FreeBSD, Linux, or any other Unix-like OS out there.

Find out the file's inode number:
First find out the file's inode number with either of the following commands:
  stat {file-name}
OR
  ls -il {file-name}

Use find to remove the file:
Use the find command as follows to find and remove the file:
  find . -inum [inode-number] -exec rm -i {} \;
When prompted for confirmation, press Y to confirm removal of the file.

Let us try to delete a file using its inode number.

(a) Create a hard-to-delete file name:
$ cd /tmp
$ touch "\+Xy \+\8"
$ ls

(b) Try to remove this file with the rm command:
$ rm \+Xy \+\8

(c) To remove the file by its inode number, first find out the inode number:
$ ls -il
Output:
781956 drwx------  3 viv viv 4096 2006-01-27 15:05 gconfd-viv
781964 drwx------  2 viv viv 4096 2006-01-27 15:05 keyring-pKracm
782049 srwxr-xr-x  1 viv viv    0 2006-01-27 15:05 mapping-viv
781939 drwx------  2 viv viv 4096 2006-01-27 15:31 orbit-viv
781922 drwx------  2 viv viv 4096 2006-01-27 15:05 ssh-cnaOtj4013
781882 drwx------  2 viv viv 4096 2006-01-27 15:05 ssh-SsCkUW4013
782263 -rw-r--r--  1 viv viv    0 2006-01-27 15:49 \+Xy \+\8
Note: 782263 is the inode number.

(d) Use find to delete the file by its inode:
Find and remove the file using the find command, typed as follows:
$ find . -inum 782263 -exec rm -i {} \;

Note: you can also add a \ character before each special character in the filename and
remove it directly, so the command would be:
$ rm "\+Xy \+\8"

If you have a file with a name like "2005/12/31", then no UNIX or Linux command can
delete this file by name; the only way to delete such a file is by its inode number.
UNIX and Linux never allow you to create a filename like 2005/12/31 locally, but if the
filesystem is accessed over NFS from Mac OS or Windows, it is possible to create such
a file.
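Pulling the steps from notes 2.1 and 2.2 together, the small shell sketch below first
parks the strange file in a scratch directory for inspection instead of deleting it
straight away. It is only an illustration: the file name, the inode number 782263 and
the /tmp/quarantine directory are example values, and options such as -xdev or stat(1)
may not exist on every older unix.

   # Minimal sketch, combining notes 2.1 and 2.2 (example names and inode number only).
   cd /tmp
   touch ./-weirdfile             # a file with a leading hyphen, as in note 2.1
   ls -li                         # note the inode number of the problem file
   # stat ./-weirdfile            # where stat(1) exists, it also shows the inode

   mkdir -p /tmp/quarantine       # scratch directory for inspection
   # suppose ls -li reported inode 782263 for the problem file:
   # (-xdev keeps find on this one filesystem; inode numbers are only unique per filesystem)
   find . -xdev -inum 782263 -exec mv {} /tmp/quarantine \;
   # ...or, once you are sure, remove it directly with a confirmation prompt:
   # find . -xdev -inum 782263 -exec rm -i {} \;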
###############################################################
3. UNDELETE OF FILES IF NO BACKUPS ARE AVAILABLE:
###############################################################

Few things are as lousy as losing an important file. Of course, all sysadmins have
professional backup software running on their systems. But in some rare cases, for
whatever reason, a backup might not be available. In such a situation it *might* still
be possible to recover files after you have accidentally deleted them. In general,
however, the prognosis for a file undelete is pessimistic.

When a file is deleted using the "rm" command, three actions occur. First, the filename
and its pointer are removed from the directory block. Second, the kernel frees up the
file's data blocks for general use. Third, the kernel frees up the file's indexing
record, or inode, for general use. Thus, quite literally, the file is effectively
destroyed from the operating system's standpoint. But it's not "gone" yet! If you act
quickly, you might salvage the file.

Some unixes provide a sort of "unrm" or "undelete" (shell) utility, suitable for some
types of filesystems, which may produce good results if you start using it immediately
after you mistakenly deleted the file. But it's likely that you will still need to do a
lot of work after using such an "unrm" tool, like processing the results with "Lazarus"
or a similar tool. In any case, it is still worth checking with your sysadmin or
checking your system.

Also, in general, if an important file was deleted by mistake, (try to) stop all write
activity on that filesystem. Maybe this section provides you with a pointer on how to
move on. Also, there might be "tools" out there that can help with such a problem.

Here are some notes on the subject of undelete on Unix.

----------------------------------------------------------------------------------------
Note 1:
----------------------------------------------------------------------------------------

http://www.cyberciti.biz/tips/linuxunix-recover-deleted-files.html

Using grep (traditional UNIX way) to recover files

Use the following grep syntax:
  grep -b 'search-text' /dev/partition > file.txt
OR
  grep -a -B[size before] -A[size after] 'text' /dev/[your_partition] > file.txt

Where,
 -i : Ignore case distinctions in both the PATTERN and the input files, i.e. match both
      uppercase and lowercase characters.
 -a : Process a binary file as if it were text.
 -B : Print the given number of lines of leading context before matching lines.
 -A : Print the given number of lines of trailing context after matching lines.
To recover text file starting with "nixCraft" word on /dev/sda1 you can try following command: # grep -i -a -B10 -A100 'nixCraft' /dev/sda1 > file.txt Next use vi to see file.txt. This method is ONLY useful if deleted file is text file. If you are using ext2 file system, try out recover command. . ---------------------------------------------------------------------------------------- Note 2: ---------------------------------------------------------------------------------------- Bring back deleted files with lsof By Michael Stutz on November 16, 2006 (8:00:00 AM) Briefly, a file as it appears somewhere on a Linux filesystem is actually just a link to an inode, which contains all of the file's properties, such as permissions and ownership, as well as the addresses of the data blocks where the file's content is stored on disk. When you rm a file, you're removing the link that points to its inode, but not the inode itself; other processes (such as your audio player) might still have it open. It's only after they're through and all links are removed that an inode and the data blocks it pointed to are made available for writing. This delay is your key to a quick and happy recovery: if a process still has the file open, the data's there somewhere, even though according to the directory listing the file already appears to be gone. This is where the Linux process pseudo-filesystem, the /proc directory, comes into play. Every process on the system has a directory here with its name on it, inside of which lies many things -- including an fd ("file descriptor") subdirectory containing links to all files that the process has open. Even if a file has been removed from the filesystem, a copy of the data will be right here: /proc/process id/fd/file descriptor To know where to go, you need to get the id of the process that has the file open, and the file descriptor. These you get with lsof, whose name means "list open files." (It actually does a whole lot more than this and is so useful that almost every system has it installed. If yours isn't one of them, you can grab the latest version straight from its author.) Once you get that information from lsof, you can just copy the data out of /proc and call it a day. This whole thing is best demonstrated with a live example. First, create a text file that you can delete and then bring back: $ man lsof | col -b > myfile Then have a look at the contents of the file that you just created: $ less myfile You should see a plaintext version of lsof's huge man page looking out at you, courtesy of less. Now press Ctrl-Z to suspend less. Back at a shell prompt make sure your file is still there: $ ls -l myfile -rw-r--r-- 1 jimbo jimbo 114383 Oct 31 16:14 myfile $ stat myfile File: `myfile' Size: 114383 Blocks: 232 IO Block: 4096 regular file Device: 341h/833d Inode: 1276722 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 1010/ jimbo) Gid: ( 1010/ jimbo) Access: 2006-10-31 16:15:08.423715488 -0400 Modify: 2006-10-31 16:14:52.684417746 -0400 Change: 2006-10-31 16:14:52.684417746 -0400 Yup, it's there all right. OK, go ahead and oops it: $ rm myfile $ ls -l myfile ls: myfile: No such file or directory $ stat myfile stat: cannot stat `myfile': No such file or directory $ It's gone. At this point, you must not allow the process still using the file to exit, because once that happens, the file will really be gone and your troubles will intensify. 
Your background less process in this walkthrough isn't going anywhere (unless you kill the process or exit the shell), but if this were a video or sound file that you were playing, the first thing to do at the point where you realize you deleted the file would be to immediately pause the application playback, or otherwise freeze the process, so that it doesn't eventually stop playing the file and exit. Now to bring the file back. First see what lsof has to say about it: $ lsof | grep myfile less 4158 jimbo 4r REG 3,65 114383 1276722 /home/jimbo/myfile (deleted) The first column gives you the name of the command associated with the process, the second column is the process id, and the number in the fourth column is the file descriptor (the "r" means that it's a regular file). Now you know that process 4158 still has the file open, and you know the file descriptor, 4. That's everything you have to know to copy it out of /proc. You might think that using the -a flag with cp is the right thing to do here, since you're restoring the file -- but it's actually important that you don't do that. Otherwise, instead of copying the literal data contained in the file, you'll be copying a now-broken symbolic link to the file as it once was listed in its original directory: $ ls -l /proc/4158/fd/4 lr-x------ 1 jimbo jimbo 64 Oct 31 16:18 /proc/4158/fd/4 -> /home/jimbo/myfile (deleted) $ cp -a /proc/4158/fd/4 myfile.wrong $ ls -l myfile.wrong lrwxr-xr-x 1 jimbo jimbo 24 Oct 31 16:22 myfile.wrong -> /home/jimbo/myfile (deleted) $ file myfile.wrong myfile.wrong: broken symbolic link to `/home/jimbo/myfile (deleted)' $ file /proc/4158/fd/4 /proc/4158/fd/4: broken symbolic link to `/home/jimbo/myfile (deleted)' So instead of all that, just a plain old cp will do the trick: $ cp /proc/4158/fd/4 myfile.saved And finally, verify that you've done good: $ ls -l myfile.saved -rw-r--r-- 1 jimbo jimbo 114383 Oct 31 16:25 myfile.saved $ man lsof | col -b > myfile.new $ cmp myfile.saved myfile.new No complaints from cmp -- your restoration is the real deal. Incidentally, there are a lot of useful things you can do with lsof in addition to rescuing lost files. ---------------------------------------------------------------------------------------- Note 3: ---------------------------------------------------------------------------------------- Recover Deleted Files Files on Unix may be deleted, but still held open by another process. While most Unix would require a utility to read a file by the filesystem and inode(5) number, the special /proc filesystem on Linux allows the recovery of deleted but held open files: Use lsof(1) to discover the deleted file, and record the Process ID (PID) and File Descriptor (FD) open to this file. Recover the file: cp /proc/$PID/fd/$FD /var/tmp/recovered The deleted file should appear as a broken symbolic link under the /proc/$PID/fd directory. Despite this, /proc still allows the file to be copied elsewhere. For related information, see how to debug Unix systems. ---------------------------------------------------------------------------------------- Note 4: ---------------------------------------------------------------------------------------- HOWTO recover deleted files on an Linux ext3 file system Please see: http://www.xs4all.nl/~carlo17/howto/undelete_ext3.html Or see Tom Pycke, Recovering Files in Linux, available at www.recover.source.net/linux For Linux ext2 file system: 1. R-Linux undelete utility: Take a look here: http://3d2f.com/tags/undelete/recover/unix/ 2. 
The ext2 file system has an addon program called e2undel[1] which allows file undeletion, although the similar ext3 file system does not support that kind of undeletion. 3. Also, mabe the following "unrm" can be of help on Linux: http://freshmeat.net/projects/unrm/ Another "unrm" pointer: http://staff.washington.edu/dittrich/talks/blackhat/tct/man/man1/unrm.1.html ---------------------------------------------------------------------------------------- Note 5: ---------------------------------------------------------------------------------------- Possible AIX undelete tool: http://www.compunix.com/products.html http://www.compunix.com/prod/analyse.html http://www.compunix.com/eval/list.html For AIX and JFS: http://www.phase2.net/2008/03/04/aix-recovering-a-deleted-file-undelete/ When you are really good with the fsdb tool (included in AIX), you might be able to recover files yourself. See another note in this document for an example of using fsdb. See man page for fsdb or http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.cmds/doc/aixcmds2/fsdb.htm ---------------------------------------------------------------------------------------- Note 6: ---------------------------------------------------------------------------------------- 1. Solaris Recovery: -- Kernel Recovery for Solaris Sparc Kernel Recovery for Solaris Sparc is a do-it-yourself data recovery software. Software performs read-only scan, which helps you to recover your important data in minutes. File System supported for recovery is UFS File system. http://www.download.com/Kernel-Recovery-for-Solaris-Sparc/3000-2248_4-10578170.html http://www.download3k.com/Press-Launch-of-Kernel-Recovery-for-Solaris-SPARC.html http://www.tucows.com/preview/505583 http://www.programurl.com/kernel-recovery-for-solaris-sparc.htm Nucleus Technologies.com: http://www.nucleustechnologies.com -- Other Solaris Data Recovery Software: http://solaris-data-recovery-software.qarchive.org/ 2. R-Tools technology: Undelete tool for Linux and Solaris: http://www.data-recovery-software.net/ ---------------------------------------------------------------------------------------- Note 7: ---------------------------------------------------------------------------------------- For AIX and JFS filesystem: an undelete program Not tested by writer of this document: /***************************************************************************** * rsb.c - Read Super Block. Allows a jfs superblock to be dumped, inode * table to be listed or specific inodes data pointers to be chased and * dumped to standard out (undelete). 
* * Phil Gibbs - Trinem Consulting (pgibbs@trinem.co.uk) ****************************************************************************/ #include #include #include #include #include #include #include #include #define FOUR_MB (1024*1024*4) #define THIRTY_TWO_KB (1024*32) extern int optind; extern int Optopt; extern int Opterr; extern char *optarg; void PrintSep() { int k=80; while (k) { putchar('-'); k--; } putchar('\n'); } char *UserName(uid_t uid) { char replystr[10]; struct passwd *res; res=getpwuid(uid); if (res->pw_name[0]) { return res->pw_name; } else { sprintf(replystr,"%d",uid); return replystr; } } char *GroupName(gid_t gid) { struct group *res; res=getgrgid(gid); return res->gr_name; } ulong NumberOfInodes(struct superblock *sb) { ulong MaxInodes; ulong TotalFrags; if (sb->s_version==fsv3pvers) { TotalFrags=(sb->s_fsize*512)/sb->s_fragsize; MaxInodes=(TotalFrags/sb->s_agsize)*sb->s_iagsize; } else { MaxInodes=(sb->s_fsize*512)/sb->s_bsize; } return MaxInodes; } void AnalyseSuperBlock(struct superblock *sb) { ulong TotalFrags; PrintSep(); printf("SuperBlock Details:\n-------------------\n"); printf("File system size: %ld x 512 bytes (%ld Mb)\n", sb->s_fsize, (sb->s_fsize*512)/(1024*1024)); printf("Block size: %d bytes\n",sb->s_bsize); printf("Flags: "); switch (sb->s_fmod) { case (char)FM_CLEAN: break; case (char)FM_MOUNT: printf("mounted "); break; case (char)FM_MDIRTY: printf("mounted dirty "); break; case (char)FM_LOGREDO: printf("log redo failed "); break; default: printf("Unknown flag "); break; } if (sb->s_ronly) printf("(read-only)"); printf("\n"); printf("Last SB update at: %s",ctime(&(sb->s_time))); printf("Version: %s\n", sb->s_version?"1 - fsv3pvers":"0 - fsv3vers"); printf("\n"); if (sb->s_version==fsv3pvers) { TotalFrags=(sb->s_fsize*512)/sb->s_fragsize; printf("Fragment size: %5d ",sb->s_fragsize); printf("inodes per alloc: %8d\n",sb->s_iagsize); printf("Frags per alloc: %5d ",sb->s_agsize); printf("Total Fragments: %8d\n",TotalFrags); printf("Total Alloc Grps: %5d ", TotalFrags/sb->s_agsize); printf("Max inodes: %8ld\n",NumberOfInodes(sb)); } else { printf("Total Alloc Grps: %5d ", (sb->s_fsize*512)/sb->s_agsize); printf("inodes per alloc: %8d\n",sb->s_agsize); printf("Max inodes: %8ld\n",NumberOfInodes(sb)); } PrintSep(); } void ReadInode( FILE *in, ulong StartInum, struct dinode *inode, ulong InodesPerAllocBlock, ulong AllocBlockSize) { off_t SeekPoint; long BlockNumber; int OffsetInBlock; static struct dinode I_NODES[PAGESIZE/DILENGTH]; ulong AllocBlock; ulong inum; static off_t LastSeekPoint=-1; AllocBlock=(StartInum/InodesPerAllocBlock); BlockNumber=(StartInum-(AllocBlock*InodesPerAllocBlock))/ (PAGESIZE/DILENGTH); OffsetInBlock=(StartInum-(AllocBlock*InodesPerAllocBlock))- (BlockNumber*(PAGESIZE/DILENGTH)); SeekPoint=(AllocBlock)? 
(BlockNumber*PAGESIZE)+(AllocBlock*AllocBlockSize): (BlockNumber*PAGESIZE)+(INODES_B*PAGESIZE); if (SeekPoint!=LastSeekPoint) { sync(); fseek(in,SeekPoint,SEEK_SET); fread(I_NODES,PAGESIZE,1,in); LastSeekPoint=SeekPoint; } *inode=I_NODES[OffsetInBlock]; } void DumpInodeContents( long inode, FILE *in, ulong InodesPerAllocBlock, ulong AllocBlockSize, ulong Mask, ulong Multiplier) { struct dinode DiskInode; ulong SeekPoint; char Buffer[4096]; ulong FileSize; int k; int BytesToRead; ulong *DiskPointers; int NumPtrs; ReadInode( in, inode, &DiskInode, InodesPerAllocBlock, AllocBlockSize); FileSize=DiskInode.di_size; if (FileSize>FOUR_MB) { /* Double indirect mapping */ } else if (FileSize>THIRTY_TWO_KB) { /* Indirect mapping */ SeekPoint=DiskInode.di_rindirect & Mask; SeekPoint=SeekPoint*Multiplier; DiskPointers=(ulong *)malloc(1024*sizeof(ulong)); fseek(in,SeekPoint,SEEK_SET); fread(DiskPointers,1024*sizeof(ulong),1,in); NumPtrs=1024; } else { /* Direct Mapping */ DiskPointers=&(DiskInode.di_rdaddr[0]); NumPtrs=8; } for (k=0;k<=NumPtrs && FileSize;k++) { SeekPoint=(DiskPointers[k] & Mask); SeekPoint=SeekPoint*Multiplier; BytesToRead=(FileSize>sizeof(Buffer))?sizeof(Buffer):FileSize; fseek(in,SeekPoint,SEEK_SET); fread(Buffer,BytesToRead,1,in); FileSize=FileSize-BytesToRead; write(1,Buffer,BytesToRead); } } void DumpInodeList( FILE *in, ulong MaxInodes, ulong InodesPerAllocBlock, ulong AllocBlockSize) { long inode; struct dinode DiskInode; struct tm *TimeStruct; printf(" Inode Links User Group Size ModDate\n"); printf("-------- ----- -------- -------- -------- -------\n"); for (inode=0;inode<=MaxInodes;inode++) { ReadInode( in, inode, &DiskInode, InodesPerAllocBlock, AllocBlockSize); if (DiskInode.di_mtime) { TimeStruct=localtime((long *)&DiskInode.di_mtime); printf("%8d %5d %8s %8s %8d %02d/%02d/%4d\n", inode, DiskInode.di_nlink, UserName(DiskInode.di_uid), GroupName(DiskInode.di_gid), DiskInode.di_size, TimeStruct->tm_mday, TimeStruct->tm_mon, TimeStruct->tm_year+1900); } } } void ExitWithUsageMessage() { fprintf(stderr,"USAGE: rsb [-i inode] [-d] [-s] \n"); exit(1); } main(int argc,char **argv) { FILE *in; struct superblock SuperBlock; short Valid; long inode=0; struct dinode DiskInode; ulong AllocBlockSize; ulong InodesPerAllocBlock; ulong MaxInodes; ulong Mask; ulong Multiplier; int option; int DumpSuperBlockFlag=0; int DumpFlag=0; while ((option=getopt(argc,argv,"i:ds")) != EOF) { switch(option) { case 'i': /* Inode specified */ inode=atol(optarg); break; case 'd': /* Dump flag */ DumpFlag=1; break; case 's': /* List Superblock flag */ DumpSuperBlockFlag=1; break; default: break; } } if (strlen(argv[optind])) in=fopen(argv[optind],"r"); else ExitWithUsageMessage(); if (in) { fseek(in,SUPER_B*PAGESIZE,SEEK_SET); fread(&SuperBlock,sizeof(SuperBlock),1,in); switch (SuperBlock.s_version) { case fsv3pvers: Valid=!strncmp(SuperBlock.s_magic,fsv3pmagic,4); InodesPerAllocBlock=SuperBlock.s_iagsize; AllocBlockSize= SuperBlock.s_fragsize*SuperBlock.s_agsize; Multiplier=SuperBlock.s_fragsize; Mask=0x3ffffff; break; case fsv3vers: Valid=!strncmp(SuperBlock.s_magic,fsv3magic,4); InodesPerAllocBlock=SuperBlock.s_agsize; AllocBlockSize=SuperBlock.s_agsize*PAGESIZE; Multiplier=SuperBlock.s_bsize; Mask=0xfffffff; break; default: Valid=0; break; } if (Valid) { if (DumpSuperBlockFlag==1) { AnalyseSuperBlock(&SuperBlock); } MaxInodes=NumberOfInodes(&SuperBlock); if (DumpFlag==1) { if (inode) DumpInodeContents(inode,in,InodesPerAllocBlock,AllocBlockSize,Mask,Multiplier); else 
DumpInodeList(in,MaxInodes,InodesPerAllocBlock,AllocBlockSize); } } else { fprintf(stderr,"Superblock - bad magic number\n"); exit(1); } } else { fprintf(stderr,"couldn't open "); perror(argv[optind]); exit(1); } } ---------------------------------------------------------------------------------------- Note 8: ---------------------------------------------------------------------------------------- http://wiki.yak.net/592 HOWTO rescue deleted Linux files | undelete | unremove | unrm | rm -v Here's how we rescued a LaTeX *.tex file that was accidentally removed on a Linux box. Stop doing anything else on the system. The idea is to use the disk as little as possible. (We stopped short of killing idle daemons, because we didn't want them scribbling stuff in log files. ) Know the first few bytes of the file you want. Hopefully they are fairly unique. The LaTeX document we wanted began with the characters "\document", so we used that pattern. Write a program that will read each sector from the raw partition (you must be root) (assuming 512 byte sectors is safest) and see if it begins with the pattern. If not, it loops and reads the next 512 bytes... If it finds it, it saves that sector and some fixed amount of following sectors (we did 600 more sectors, which is 300 KBytes) in a rescue file. Save probably twice as long a file as you think you're looking for. Save them to an extra partition -- or invoke "scp" or something to save them on another machine. (Usually ext2 & ext3 store files contiguously on disk -- especially if they are not too big & are written all at once.) The following TCL script did the job. Make it open the exact partition you want to scan. It needs another partition to write the rescue files to. grope.tcl # # This is in the language Tcl. # Usage: # tclsh scriptname < /dev/hda1 (the partition with the deleted file) # # Notice: change the MOUNT below to a different partition! # # Also fix the "string match" pattern -- we used \document for a LaTeX document. # # Occasinally sector numbers are written out, to indicate progress. # ( 1 sector == 512 bytes == 0.5KBytes ) set i 0 set n 0 fconfigure stdin -translation binary -encoding binary while true { set x [read stdin 512 ] if {$x==""} break if {[string match {\\document*} $x ]} { incr i puts stderr "SAVING $i" set f [open /WRITABLE_MOUNT_TO_SAVE_FILES_IN_GOES_HERE/rescue.$i w] fconfigure $f -translation binary -encoding binary puts -nonewline $f $x puts -nonewline $f [read stdin [expr 600*512] ] close $f } incr n if { ($n % 200000)==0 } { puts -nonewline stderr $n. } } Use "less" to examine the rescue files to see if you can find your data. Also the "strings" command is very good about extracting ASCII text portions. Even better, if you have physical access to the machine, shut down the system IMMEDIATELY and physically install its disk as an extra drive in another unix box. Do your scanning of the raw disk from there. (In our recent case, we didn't have access to this box.) Or boot a KNOPPIX CD (which will not write to any partitions unless you specifically mount them writeable from a root shell.) I've also used this kind of technique to rescue JPEG files from a digital camera's Compact Flash with a corrupted FAT file system. We wrote a program that started a new rescue file every time it found "JFIF" as the first 4 bytes of a sector, even if it was still saving the previous rescue file. We completely rescued about 3/4 of the images this way, and fragments of more. 
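For a one-off rescue of a text file, roughly the same scan can also be improvised from
standard tools instead of a script. The sketch below is only an illustration: it assumes
GNU grep and dd, and the pattern, device name, offset and carve size are example values
you would substitute with your own. As above, write the rescue file to a different
filesystem than the one you are scanning.

   PATT='\document'                 # first bytes of the lost file (example pattern)
   DEV=/dev/sda1                    # partition that held the file (example)
   OUT=/mnt/other/rescue.bin        # rescue file on ANOTHER filesystem (example)

   # -F literal string, -a treat the device as text, -b/-o print the byte offset of each hit
   grep -F -a -b -o "$PATT" "$DEV" | head -5

   # Suppose the first hit was at byte offset 123456789: carve ~300 KB starting at the
   # 512-byte sector that contains it (the division truncates to a sector boundary).
   dd if="$DEV" of="$OUT" bs=512 skip=$((123456789 / 512)) count=600

   strings "$OUT" | less            # inspect what was carved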
Obviously the data you are rescuing must be important enough to warrant this much
trouble, with no guarantee of successful results. Your file could always have been
overwritten, or it could be fragmented so that you don't find the pieces. But the
couple of times I've had to do this (for someone else's data!) we've had pretty good
success.

----------------------------------------------------------------------------------------
Note 9: special case: text file edited with vi
----------------------------------------------------------------------------------------

If the deleted file was a text file that was recently edited with vi, there might still
be a version of it available on your system. On most unix systems, vi keeps recovery
copies of the files it edits. Check /var/preserve/username (or run "vi -r" to list
recoverable files), or a similar directory depending on the unix version, where a
recent version of your text file might still exist.

----------------------------------------------------------------------------------------
Note 10:
----------------------------------------------------------------------------------------

Subject  : Undelete of a file on AIX, using fsdb.
Remark   : Quite an elaborate procedure, but it seems to work for small files.
Important: Be careful in using fsdb.
Document : http://www.phase2.net/2008/03/04/aix-recovering-a-deleted-file-undelete/

-- Contents repeated here:

This is a document I wrote a while back for work that I thought I would release in
hopes that some people out there would find it useful.

Preferably, you have a backup of the file system that you can use. If not, the
filesystem you are about to try to recover a file on must meet these requirements:
- No new files have been created on the filesystem.
- No files have been extended.
- The filesystem is able to be unmounted.
- It is a JFS filesystem, not JFS2.
If so, then please, drink a few more beers and continue, but before you do…
BACKUP THE CURRENT FILESYSTEM!

Also, note that if you are dealing with a directory that has been deleted and would
like to recover both the directory and the files under that directory, you should try
Recovering a Deleted Directory (a document I have yet to post..). It follows many of
the same steps, but has some very important differences. Do not try to use this
procedure to recover deleted directories and the files that were contained within
them. You will mess up.

Before we begin, I need to note a few things. I take no responsibility if this screws
up your system. Use this at your own risk. Also, the example presented here is an
actual representation of me recovering a deleted file; these are not just made-up
numbers. Also, this only works on jfs filesystems, not jfs2. The jfs2 fsdb is much
different and I haven't had a chance to play with it to determine the proper way of
doing this. Now that I've said that, we can begin.

We'll use an example directory with some example files. Our directory is called /test
and our filesystem is testlv, otherwise known as /dev/testlv. In our example, our
Junior System Admin, Myron, has accidentally deleted a perl script called testfile.pl
and needs to recover it.

Note: If you are performing this operation on a filesystem while in maintenance mode,
do NOT use option 1 when asked how to mount the filesystems. ALWAYS use option 2, which
specifies to start a shell before mounting the filesystems. Otherwise, the system will
force a fsck -y on the filesystem and delete your files.

Step 1.
First, run this command: ls -id /testOutput: [test:/]# ls -id /test 2 /test/ This informs us that the inode for the directory /test is 2. Record this for future use. Step 2. Unmount /test umount /test Output: None We must unmount the directory. We don’t want anyone to try and use it while we are attempting to restore the file. Step 3 Now we’ll start up the filesystem debugger. fsdb /dev/testlv Output: [test:/]# fsdb /dev/testlv File System: /dev/testlv File System Size: 193200128 (512 byte blocks) Disk Map Size: 1660 (4K blocks) Inode Map Size: 831 (4K blocks) Fragment Size: 4096 (bytes) Allocation Group Size: 16384 (fragments) Inodes per Allocation Group: 8192 Total Inodes: 12075008 Total Fragments: 24150016 This starts the filesystem debugger on our testlv filesystem. Step 4 Now we look at our inode number. 2i Output: 2i i#: 2 md: d-g-rwxr-xr-x ln: 4 uid: 3 gid: 3 szh: 0 szl: 512 (actual size: 512) a0: 0x25d a1: 0x00 a2: 0x00 a3: 0x00 a4: 0x00 a5: 0x00 a6: 0x00 a7: 0x00 at: Mon Jan 10 11:19:17 2005 mt: Mon Jan 10 11:11:26 2005 ct: Mon Jan 10 11:11:26 2005 The INODE in the command is the inode number we recorded in step #1. This will display the inode information for the directory. The field a0 contains the block number of the directory. The following steps assume only field a0 is used. If a value appears in a1, etc, it may be necessary to repeat steps #5 and #6 for each block until the file to be recovered is found. Step 5 Move to the block a0b Output: a0b 0x000025d000 : 0x00000000 (0) This moves to the block pointed to by field “a0? of this inode. Step 6 Now we need to print out some data. p256c Output: p256c 0x000025d000: \0 \0 \0 \? \0 \? \0 \? . \0 \0 \0 \0 \0 \0 \? 0x000025d010: \0 \? \0 \? . . \0 \0 \0 \0 \0 \? \0 \? \0 \n 0x000025d020: l o s t + f o u n d \0 \0 \0 \0 \0 \? 0x000025d030: \0 $ \0 \? m e m _ r e p o r t _ 2 0x000025d040: 0 0 4 1 1 0 1 . d m p . g z \0 \0 0x000025d050: \0 \0 \0 \? \0 \s \0 \? o r a s c r a t 0x000025d060: c h . c p i o . g z \0 \0 \0 \0 \0 \? 0x000025d070: \0 ( \0 \s u s e r _ a c t i v i t 0x000025d080: y _ 2 0 0 4 1 1 0 1 . d m p . g 0x000025d090: z \0 \0 \0 \0 \0 \0 \? \0 , \0 ! u s e r 0x000025d0a0: _ a c t i v i t y _ d e t _ 2 0 0x000025d0b0: 0 4 1 1 0 1 . d m p . g z \0 \0 \0 0x000025d0c0: \0 \? ` \0 \? @ \0 \? E C R 1 X \0 \0 \0 0x000025d0d0: \0 \0 \0 \? \? 0 \0 \? t e s t f i l e 0x000025d0e0: . p l \0 \? \0 \a t e s t d i r \0 0x000025d0f0: j d u c k o . t x t \0 \0 \0 \0 \0 \? The command p256c stands for ‘print 256 bytes in character mode’. You could type ‘p128c’ and it would print 128 bytes in character mode and so on. The beginning left column is the address of the first character in that row. The important thing in this output is to find which line the file to be recovered is on. Our file ( testfile.pl ) is located on line 0×000025d0d0. Next, we have to find the address of the first character of our filename. To do this, starting at 0, count in hexidecimal until you reach the first character of the filename. In our example, the ‘t’ of testfile.pl is at address 0×000025d0d8. Record this address. If you cannot find your filename here, issue the command again. It will print the next 256 bytes in character mode. Do this until you find your filename. Here’s a layout to help you in figuring out how we got the address: Address: 0 1 2 3 4 5 6 7 8 9 A B C D E F 0×000025d0d0: \0 \0 \0 \? \? 0 \0 \? t e s t f i l eStep 7 Reset our position. 
a0b Output: a0b 0x000025d000 : 0x00000000 (0) This resets our position back to the beginning of the a0 block. This is necessary whenever you want to reprint out the byte data. Remember, however, that if you had to use the ‘p’ command many times to find your filename, you will probably have to use it many times each time you reset back to the beginning. Step 8 Print our data in decimal p256e Output: p256e 0x000025d000: 0 2 12 1 11776 0 0 2 0x000025d010: 12 2 11822 0 0 16 20 10 0x000025d020: 27759 29556 11110 28533 28260 0 0 17 0x000025d030: 36 26 28005 27999 29285 28783 29300 24370 0x000025d040: 12336 13361 12592 12590 25709 28718 26490 0 0x000025d050: 0 18 28 18 28530 24947 25458 24948 0x000025d060: 25448 11875 28777 28462 26490 0 0 19 0x000025d070: 40 29 30067 25970 24417 25460 26998 26996 0x000025d080: 31071 12848 12340 12593 12337 11876 28016 11879 0x000025d090: 31232 0 0 20 44 33 30067 25970 0x000025d0a0: 24417 25460 26998 26996 31071 25701 29791 12848 0x000025d0b0: 12340 12593 12337 11876 28016 11879 31232 0 0x000025d0c0: 18 24576 320 5 17731 21041 22528 0 0x000025d0d0: 0 21 304 11 29797 29556 26217 27749 0x000025d0e0: 11888 27648 288 7 29797 29556 25705 29184 0x000025d0f0: 27236 30051 27503 11892 30836 0 0 23 0x000025d100: 260 16 27233 28005 29549 24947 29537 29281 0x000025d110: 11892 30836 0 0 0 0 0 0 0x000025d120: 0 0 0 0 0 0 0 0 0x000025d130: 0 0 0 0 0 0 0 0 0x000025d140: 0 0 0 0 0 0 0 0 The command ‘p256e’ stands for ‘print 256 bytes in decimal word format’. This output can be helpful and confusing at the same time. First, find the beginning address that our file name is on. In our example, this was 0×000025d0d0. The line in decimal format reads: 0x000025d0d0: 0 21 304 11 29797 29556 26217 27749 For each file, assume the following: {ADDRESS}: x x x x x x x x x | | | | |---- filename -----| inode # --+----+ | | | +-- filename length record LENGTH --+ Note that the inode # may begin on any part of the line. The reason we print the data in decimal format is to help us determine where in the line the inode number is. There are several ways to help you do this, here are some: Count the number of characters in your filename, then try and find that number in our address line. ( eg: There are 11 characters in the filename ‘testfile.pl’. ) You can see on our line there is a matching number 11. Recount to the address 0×000025d0d8, assuming each column represents two numbers. The first column is 0 and 1. The second column is 2 and 3, then 4 and 5, etc. When you reach the column that matches your address, go back one column. The number in this column should match up with your filename length. Unless, of course, your filename is over 255 characters. Once you are sure you have the the correct column for your filename length, you are going to count back three more columns. This should put at the first column of the inode number. We’ll use our example decimal line to explain this more: 0x000025d0d0: 0 21 304 11 29797 29556 26217 27749 Like we mentioned before, testfile.pl is 11 characters. We find a matching number 11 in the 4th column. That means that the column with ‘304' is our record length field and the 0 and 21 columns make up our inode. Now, that we know which columns our inode is in ( columns 1 and 2 ), we must translate this number into our real inode number. Step 9 Reset our position again. a0b Output: a0b 0x000025d000 : 0x00000000 (0) Again, we have to reset our position back to the beginning because this time, we’re going to print the information in hex. 
Step 10. Print our data in hex.

p256x

Output:

p256x
0x000025d000:  0000 0002 000C 0001 2E00 0000 0000 0002
0x000025d010:  000C 0002 2E2E 0000 0000 0010 0014 000A
0x000025d020:  6C6F 7374 2B66 6F75 6E64 0000 0000 0011
0x000025d030:  0024 001A 6D65 6D5F 7265 706F 7274 5F32
0x000025d040:  3030 3431 3130 312E 646D 702E 677A 0000
0x000025d050:  0000 0012 001C 0012 6F72 6173 6372 6174
0x000025d060:  6368 2E63 7069 6F2E 677A 0000 0000 0013
0x000025d070:  0028 001D 7573 6572 5F61 6374 6976 6974
0x000025d080:  795F 3230 3034 3131 3031 2E64 6D70 2E67
0x000025d090:  7A00 0000 0000 0014 002C 0021 7573 6572
0x000025d0a0:  5F61 6374 6976 6974 795F 6465 745F 3230
0x000025d0b0:  3034 3131 3031 2E64 6D70 2E67 7A00 0000
0x000025d0c0:  0012 6000 0140 0005 4543 5231 5800 0000
0x000025d0d0:  0000 0015 0130 000B 7465 7374 6669 6C65
0x000025d0e0:  2E70 6C00 0120 0007 7465 7374 6469 7200
0x000025d0f0:  6A64 7563 6B6F 2E74 7874 0000 0000 0017
0x000025d100:  0104 0010 6A61 6D65 736D 6173 7361 7261
0x000025d110:  2E74 7874 0000 0000 0000 0000 0000 0000
0x000025d120:  0000 0000 0000 0000 0000 0000 0000 0000
0x000025d130:  0000 0000 0000 0000 0000 0000 0000 0000
0x000025d140:  0000 0000 0000 0000 0000 0000 0000 0000

First, we find the line that begins with our address 0x000025d0d0. There it is:

0x000025d0d0:  0000 0015 0130 000B 7465 7374 6669 6C65

Next, find the two columns that we know our inode is in. For us, that's columns 1 and 2. Column 1
is all 0's, so we can disregard it. Column 2, however, is 0015. Translate 15 from hexadecimal to
decimal and you get 21, which is our real inode number.

Some of you may be asking why we didn't just use the inode number from the decimal output in
step 8. The reason is that it isn't always this easy. Take, for example, the entry just above ours:
the directory ECR1X. Its inode number, like ours, is in columns 1 and 2. However, if you compare
the lines between hexadecimal and decimal, you can immediately see the difference:

Decimal:  0x000025d0c0:  18  24576
Hex:      0x000025d0c0:  0012  6000

If you translate 126000 from hexadecimal to decimal, the output is 1204224, which is the correct
inode number for the ECR1X directory. If you can figure out how to translate "18 24576" into
1204224, please let me know and I'll update this document. In any case, we now know the inode
number of the missing file. We're close to recovery!

Step 11. We go to our new inode number.

21i

Output:

21i
i#: 21  md: f---rw-r--r--  ln: 0  uid: 0  gid: 3
szh: 0  szl: 45 (actual size: 45)
a0: 0xeff  a1: 0x00  a2: 0x00  a3: 0x00  a4: 0x00  a5: 0x00  a6: 0x00  a7: 0x00
at: Mon Jan 10 14:16:40 2005
mt: Mon Jan 10 14:16:48 2005
ct: Mon Jan 10 14:16:53 2005

From this output, you can see that we have a file.

Step 12. Set the link count.

21i.ln=1

Output:

21i.ln=1
0x0000020a88  :  0x00000001  (1)

This sets the link count of the file back to 1. You can verify this by reissuing the command from
step #11 and noticing that the 'ln' field has been incremented:

21i
i#: 21  md: f---rw-r--r--  ln: 1  uid: 0  gid: 3
szh: 0  szl: 45 (actual size: 45)
a0: 0xeff  a1: 0x00  a2: 0x00  a3: 0x00  a4: 0x00  a5: 0x00  a6: 0x00  a7: 0x00
at: Mon Jan 10 14:16:40 2005
mt: Mon Jan 10 14:16:48 2005
ct: Mon Jan 10 14:16:53 2005

We have now told the filesystem that the link count for inode 21 should be 1, which means that
there should be a filename pointing at this inode. This basically reverses what the OS actually
does when deleting a file: it doesn't erase the file data, it unlinks the filename from its inode
number, effectively preventing you from seeing the data.
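The hexadecimal-to-decimal translations in steps 8 and 10 do not need a separate calculator; printf
and shell arithmetic can do them. A small sketch using the numbers from the dumps above (it assumes
a printf and a shell that accept C-style hex constants, e.g. bash or ksh93); it also shows one way
to read the two decimal words mentioned above:

   printf '%d\n' 0x15              # 21, the inode number of testfile.pl
   printf '%d\n' 0x126000          # 1204224, the ECR1X inode (hex words 0012 and 6000 joined)
   echo $(( 18 * 65536 + 24576 ))  # 1204224 again: each decimal column is one 16-bit word,
                                   # so inode = first_word * 65536 + second_word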
Step 13. Quit.

q

Output:

q
[test:/]#

This quits out of fsdb.

Step 14. Fsck our volume.

fsck /dev/testlv

Output:

[test:/]# fsck /dev/testlv
** Checking /dev/rtestlv (/test)
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
Unreferenced file I=21  owner=root  mode=100644  size=45  mtime=Jan 10 14:16 2005 ; RECONNECT? y
** Phase 5 - Check Inode Map
Bad Inode Map; SALVAGE? y
** Phase 5b - Salvage Inode Map
** Phase 6 - Check Block Map
Bad Block Map; SALVAGE? y
** Phase 6b - Salvage Block Map
18 files 21893872 blocks 171306256 free
***** Filesystem was modified *****

This does a filesystem check on /dev/testlv. As you can see, it finds an inode claiming it is
linked to, but no file that links to it. We answer 'y' to tell it to reconnect the inode to a
filename, effectively giving us our file back!

Step 15. Remount our directory.

mount /test

Output: None

We must remount our filesystem to get back at our file.

Step 16. Go into lost and found. It's where all lost little kiddies go. Duh.

cd /test/lost+found

Output: None

Our file is now located in lost+found. If you do an 'ls' in this directory, you will see something
like the following:

[test:/test/lost+found]# ls -l
total 8
-rw-r--r--   1 root     sys          45 Jan 10 14:16 21

And if we cat the file 21, we get the following:

[test:/test/lost+found]# cat 21
#!/usr/bin/perl
print "this is a test\n";

Ta-da! It's Myron's missing perl script!

As a final aside, I will say that there may be different and much better ways of recovering files
on AIX; however, this is the way I constructed from notes I found on various mailing lists and a
few days of fooling around with it. So if you see some mistakes in this document or have some
suggestions for better ways of doing this, please let me know! I will happily update this document
with better information as it is provided.

I hope this helps some of you who have to deal with certain people who accidentally delete files on
your systems. Nothing beats a good backup, but when you don't have one of those, this can always be
used as a fallback.
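As a small follow-up to step 16: files that fsck reconnects show up in lost+found named after their
inode number, so the last chore is to identify and rename them. A minimal sketch using the names
from the example above (the target name is, of course, whatever the file was originally called):

   cd /test/lost+found
   file 21                    # 'file' usually recognizes scripts, archives, and the like
   mv 21 /test/testfile.pl    # give the recovered file its original name back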
3) "fsdb /{Mountpoint}" or "fsdb /dev/{LVname}" (where Mountpoint is the filesystem mount point, and LVname is the logical volume name of the filesystem) 4) "{INODE}i" (where INODE is the inode number recorded in step 1) This will display the inode information for the directory. The field a0 contains the block number of the directory. The following steps assume only field a0 is used. If a value appears in a1, etc, it may be necessary to repeat steps #5 and #6 for each block until the file to be recovered is found. 5) "a0b" (moves to block pointed to by field "a0" of this inode) 6) "p128c" (prints 128 bytes of directory in character format) Look for missing filename. If not seen, repeat this step until filename is found. Record address where filename begins. Also record address where PRIOR filename begins. If filename does not appear, return to step #5, and selecting a1b, a2b, etc. Note that the address of the first field is shown to the far left. Increment the address by one for each position to the right, counting in octal. 7) "a0b" (moves to block pointed to by field "a0" of this inode) If the filename was found in block 1, use a1b instead, etc. 8) "p128e" (prints first 128 bytes in decimal word format) Find the address of the file to recover (as recorded in step 6) in the far left column. If address is not shown, repeat until found. 9) Record the address of the file which appeared immediately PRIOR to the file you want to recover. 10) Find the ADDRESS of the record LENGTH field for the file in step #9 assuming the following format: {ADDRESS}: x x x x x x x x x x ... | | | | |-------- filename ------| inode # --+----+ | | | +-- filename length record LENGTH --+ Note that the inode number may begin at any position on the line. Note also that each number represents two bytes, so the address of the LENGTH field will be `{ADDRESS} + (#hops * 2) + 1' 11) Starting with the first word of the inode number, count in OCTAL until you reach the inode number of the file to be restored, assuming each word is 2 bytes. 12) "0{ADDRESS}B={BYTES}" (where ADDRESS is the address of the record LENGTH field found in step #10, and BYTES is the number of bytes [octal] counted in step #11) 13) If the value found in the LENGTH field in step #10 is greater than 255, also type the following: "0{ADDRESS-1}B=0" (where ADDRESS-1 is one less than the ADDRESS recorded in step #10) This is necessary to clear out the first byte of the word. 14) "q" (quit fsdb) 15) "fsck {Mountpoint}" or "fsck /dev/{LVname}" This command will return errors for each recovered file asking if you wish to REMOVE the file. Answer "n" to all questions. For each file that is listed, record the associated INODE number. 16) "fsdb /{Mountpoint}" or "fsdb /dev/{LVname}" 17) {BLOCK}i.ln=1 (where BLOCK is the block number recoded in step #15) This will change the link count for the inode associated with the recovered file. Repeat this step for each file listed in step #15. 18) "q" (quit fsdb) 19) "fsck {Mountpoint}" or "fsck /dev/{LVname}" The REMOVE prompts should no longer appear. Answer "y" to all questions pertaining to fixing the block map, inode map, and/or superblock. 20) If the desired directory or file returns, send money to the author of this document. ---------------------------------------------------------------------------------------- Note 12: ---------------------------------------------------------------------------------------- This note has some interresting feautures. 
----------------------------------------------------------------------------------------
Note 12:
----------------------------------------------------------------------------------------

This note has some interesting features. You can't use it for all types of undelete, but maybe you
want to take a look.

Original: http://lde.sourceforge.net/UNERASE.txt

The contents are repeated here:

I imagine that most of the people initially using this package will be the ones who have recently
deleted something. After all, that's what finally inspired me to learn enough about the different
file systems to write this package. Undelete under unix really isn't that hard; it really only
suffers the same problem that DOS undelete does, which is: you can't recover data that someone else
has just overwritten. If you are quick and have very few users on your system, there is a good
chance that the data will be intact and you can go ahead with a successful undelete.

I don't recommend using this package to undelete your /usr/bin directory or really any directory,
but if you have trashed a piece of irreplaceable code or data, undelete is where it's at. If you
can reinstall or have recent backups, I'd recommend you try them. But it's up to you; besides,
sometimes playing with lde/undelete for a while is a lot more fun than going back and recoding a
few hours worth of lost work.

Before I tell you how to undelete stuff, have a look at doc/minix.tex (or the ps or dvi version).
Even if you aren't using a minix file system, read it carefully; it will get you used to the terms
and the general idea behind things here.

These are the steps for a successful undelete:

######################### STEP ONE ##################################

Unmount the partition which has the erased file on it. If you want to, you can remount it
read-only, but it isn't necessary.

NOTE: lde does some checks to see if the file system is mounted, but it does not check if it was
mounted read-only. Some functions will be deactivated for any (read-only or read/write) mounted
partition.

######################### STEP TWO ##################################

Figure out what you want to undelete. If you know what kind of file you are looking for (tar file,
compressed file, C file), finding it will be a lot easier. There are a few ways to look for file
data. lde supports a type search and a string search for data at the beginning of a file.
Currently, the supported types include gz (gzip), tgz (tarred gzip file), and script (those
beginning with "#!/").

---- EXAMPLE ----
String search (search for a PKzip file - starts with PK, -O 0 not required):
   lde -S PK -O 0 /dev/hda1
String search (search for JPEG files - JFIF starts at byte 6):
   lde -S JFIF -O 6 /dev/hda1
Type search (search for a gzipped tar file):
   lde -T tgz /dev/hda1
-----------------

When searching by type, you can also include a filename; the desired pattern will be extracted from
the file. You should specify an offset (-O) and length (-L) when using this option. This option was
included to make generalized searches easier. You can find pattern, length, and offset information
in /etc/magic, which you can use to generate your own template files, or specify lengths and
offsets so that existing files may be used as templates.

---- EXAMPLE ----
Type search (search for a core file - see /etc/magic to determine -O/-L):
   lde -T /proc/kcore -O 216 -L 4 /dev/hda1
-----------------

If you add --recoverable to the command line, it will check whether another active inode uses any
blocks in this inode. If no blocks are marked used by another inode, "recovery possible" will be
printed. If blocks are used by another file, "recovery NOT possible" will be printed to the screen.
You may still be able to get some data back even when it reports that recovery is not possible. To
get an idea of how many blocks are in use, you will have to check its recoverability from lde via
its curses interface.

---- EXAMPLE ----
   ./lde --paranoid -T script --ilookup --recoverable /dev/hda5
---- OUTPUT ----
   Paranoid flag set. Opening device "/dev/hda5" read-only.
   User requested autodetect filesystem. Checking device . . . Found ext2fs on device.
   Match at block 0x107, check inode 0xB, recovery possible.
   Match at block 0x421E7, no unused inode found.
-----------------

When you run lde in these modes, it will report a block (and an inode, if you are lucky and used
the --ilookup flag) where a match was found. Take this inode number and go to step (3).

If lde doesn't report anything on its own, or the search detailed above does not suit your needs,
you can use grep to search the partition for data and pipe it through lde, which will attempt to
find a block and inode again. The recommended procedure (all this can go on one line, the '\'
indicates continuation) is:

   grep -b SEARCH DEVICE | awk '{FS = ":" } ; {print $1 }' | \
      lde ${LDE_OPT} --grep DEVICE

A shell script (crash_recovery/grep-inode) is included that will do this for you:

   grep-inode [grep_options] search_string device

---- EXAMPLE ----
   grep-inode -i MyDevelopment.h /dev/hda1
-----------------

If none of these search methods are productive, you can page through the disk with an editor
(emacs /dev/hda2), or the preferred choice might be to page through it with lde. Fire up lde and go
into block mode (hit 'b'), then use PG_UP/PG_DN to flip through all the blocks until you find one
you like. Hitting '^R' while displaying the block will attempt to find an inode which references
the block.

######################## STEP THREE #################################

If you have an inode number, things are looking good. Go into inode mode and display this inode.
Then hit 'R' (use capital 'R') to copy the inode information to the recovery block list and enter
recovery mode. Now hit 'R' again and lde will prompt you for a file name (you can include a full
path). Make sure you write it to a FILE SYSTEM OTHER THAN THE ONE WHICH THE DELETED FILE RESIDES
ON, or you will probably overwrite it as you go. One day, when lde supports disk writes, it will be
able to undelete the file to its original location, but for now this is safer.

The recovered file will be a little larger than the original, as the last block will be padded with
zeroes (or whatever was on the disk at the end of the last block). If you did find an inode for the
deleted file, you can copy its old size to the new inode by using lde to edit the two inodes (don't
use lde's copy/paste, as it will copy the entire inode and undo all the work you just did to
restore the file). (A small sketch for trimming that padding off appears a bit further down.)

###################### OTHER OPTIONS ################################

If you were unable to find an intact inode, things are going to be tough. You will have to find all
the blocks in the file, in order. If your disk is relatively unfragmented, you can hopefully find
everything in order, or at least close by. Currently, you have to tag all the direct blocks, then
find the indirect blocks and tag them. If the indirect block was wiped or you are unable to find
it, you've got a lot of work to do.
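Relating back to the remark under STEP THREE that the recovered copy comes out a little larger than
the original: if you know the original byte size (for instance from the old inode), the trailing
padding can be cut off afterwards. A minimal dd sketch; the path and the size of 4500 bytes are
made-up values:

   dd if=/home/recover/lostfile of=/home/recover/lostfile.exact bs=1 count=4500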
You can copy individual blocks one at a time to the recovery file by using 'w' in block mode.
Display the next block in the file, hit 'w', then enter the filename (if you hit enter, the last
filename will be reused and the block will be appended to the file). lde will always ask if you
want to append, overwrite, or cancel when a file exists. You can override this by setting the
append flag from the flags menu ('f' from most modes). If you find any type of indirect block, you
can copy it to the recovery inode in its corresponding position and recover a whole bunch of blocks
at once. Leave the direct blocks filled with zeros.

Another option is to use dd. Real programmers still probably use emacs and dd to hack a fs. ;)
If you know there are a bunch (one or more) of contiguous blocks on the disk, you can use the unix
command dd to copy them from the device to a file.

---- EXAMPLE ----
To copy blocks 200-299 from the device /dev/hda1 to /home/recover/file1:

   dd if=/dev/hda1 of=/home/recover/file1 bs=1024 count=100 skip=200

   if      input file or device
   of      output file or device
   bs      blocksize (will be 1024 for most linux fs's)
   count   number of blocks to copy
   skip    number of blocks to skip from the start of the device
-----------------

Read the dd man page for more info.

#################### ABOUT INDIRECT BLOCKS ##########################

[ Mail to an lde user ]

> 1 - install a routine that lets you read what the indirect blocks
> are pointing to in the chain, I mean, I know that file X has 2
> indirect blocks but what blocks do these point to and how do I find
> out?

This is hard to describe, but say you have figured out how to use inode mode and you are looking at
the blocklist contained in that inode (it should list all the direct blocks and the 1x, 2x, and 3x
indirect blocks). When you hit 'B' while the cursor is sitting on the 1x indirect block, it will
take you to that block in block mode; then each entry in that block (most likely each entry is
4 bytes -- as in the ext2 fs) points to another block in the chain. I.e.:

INDIRECT BLOCK: 0x000200

Now look at block 0x000200:

0000:  01 00 00 00 02 00 00 00 : 04 04 04 00 10 01 00 00

This would indicate that the next 4 blocks in the file are 0x00000001, 0x00000002, 0x00040404,
0x00000110. The same is true for double indirect blocks, but the double indirect block contains
pointers to more indirect blocks, which you must then look up as above.

That was a pretty lousy explanation; someday I do plan to add a feature where you may view all the
blocks in a file without doing the indirect indexing yourself. For now, lde is mostly a crutch for
last ditch efforts at file recovery, but I'm glad if people find other uses for it.

################# RECOVERING WITHOUT INODES #######################

[ This is mail to a person who was unable to find an inode; it gives some last ditch suggestions
before giving up. ]

In a perfect world, or on a virgin disk, everything would be sequential. But with things like unix
and (network) file sharing, many people can write to the disk at the same time, so the blocks can
get interleaved. Also, depending on the free space situation of the disk, two free blocks may not
exist sequentially on the disk. Also, there are file "holes" in ext2, where there are block
pointers of zero on the disk. Normally an indirect block would point to 256 direct blocks, but with
zero entries it may be less than this.

If things are perfect, here is how I imagine your disk is laid out:

Direct blocks 1-9: you already know where these are and they are in that tiny recovery file (9k).
These were not sequential, so it makes me wonder if the rest of the bytes will be laid out in
order.

Indirect block: This takes up one block, and ideally your data would start right after it.
256 blocks of data:
2x indirect block: Should only have one entry, pointing to the next block on the disk.
Indirect block: pointed to by the 2x indirect block.
88 blocks of data:

So my last ditch recommendation is to use dd to copy the blocks off the disk and then cat all the
dd'ed files together.

0x5e65e - 0x5e660  |
0x61a72            |
0x5e661            +-- These are the direct blocks; you could
0x61ad4            |   use the lde recovered file instead of
0x5e662 - 0x5e664  |   dd + cat.
0x5e665 - 0x5e764  -  256 blocks of data
0x5e750 - 0x5e7a8  -  88 blocks of data

Things look bad because the numbers are out of sequence (those 256 blocks of data should end right
before the 2x indirect block at 0x5e74), and there's 0x10 blocks unaccounted for (maybe this is
just some of the ext2 file system data which is dispersed about the disk -- it could fall anywhere
in that data range if it's there). So try:

---- EXAMPLE ----
   lde  (recover the direct blocks to /home/recover/block1)
   dd if=/dev/sdb1 of=/home/recover/block2 bs=1024 count=256 skip=386661
   dd if=/dev/sdb1 of=/home/recover/block3 bs=1024 count=88 skip=386896
   cat block1 block2 block3 > access_file.dos
-----------------
(See the small conversion sketch at the end of this note for how the skip values relate to the hex
block numbers above.)

#################### TRIPLE INDIRECT BLOCKS #########################

[ This is a response to one person's request for immediate help recovering a very large file -- the
stuff about the triple block having _three_ entries was specific to this person's problem. In
general, though, the triple indirect block will not have very many entries, so this method might be
viable until I get things together and write in the triple indirect block support. ]

lde allows you to append a single block to the recover file (use 'w' from block mode) -- you can
page through the triple indirect blocks to figure out the block order and then write each block to
the recover file. I.e. after piecing things together from the triple indirect block, you should
have a list of all the blocks in the file; now display the first block on the screen, write it to
the file, display the second block, write it to the file . . . I really don't think it's worth it
for 145,000 blocks though.

The semi-automated way to do this is to make some fake inodes. The triple indirect block should be
pretty empty - maybe 3 entries. Each of these entries points to a double indirect block.

Solution:

1) Recover any direct/indirect/double indirect blocks in the original inode to a file. Do this
   with lde.
2) Look at the triple indirect block. It should have 3 entries. Write down the 3 double indirect
   blocks listed here.
3) Use the recover mode fake inode, fill in all entries with zeroes. Now fill in the 1st double
   indirect block that you wrote down in step 2 in the slot for the 2x indirect block.
4) Execute a recover, dump it to a file, say "file1". Repeat step 3 with the other two double
   indirect blocks from step 2.
5) Now you should have 4 files; concatenate them all together and, with any luck, it will un-tar.
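One detail worth spelling out about the dd commands in the RECOVERING WITHOUT INODES example above:
lde reports block numbers in hex, while dd's skip= and count= want decimal values. printf can do
the conversion (these are the same numbers as in that example; the arithmetic form assumes a shell
such as bash or ksh93):

   printf '%d\n' 0x5e665                        # 386661, first of the 256 contiguous data blocks
   printf '%d\n' 0x5e750                        # 386896, first of the 88 data blocks
   printf '%d\n' $(( 0x5e764 - 0x5e665 + 1 ))   # 256, the block count itself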
----------------------------------------------------------------------------------------
Note 13:
----------------------------------------------------------------------------------------

>>> Some tools or info that might be useful:

1. Midnight Commander is GNU (free) software that runs on UNIX based operating systems. At the
   time of writing, the undelete feature only works on ext2 filesystems.
   Midnight Commander can be obtained at http://www.ibiblio.org/mc/

2. Open source forensics:
   http://www.opensourceforensics.org/tools/unix.html

3. R-Linux, a recovery and undelete tool for the Ext2 fs:
   http://3d2f.com/tags/undelete/recover/unix/

4. http://foremost.sourceforge.net/
   Also take a look at Tom Pycke, Recovering Files in Linux, available at
   www.recover.source.net/linux

5. R-Linux 1.0, a data recovery and undelete tool for the Ext2FS (Linux) file system:
   http://www.supershareware.com/info/r-linux.html

6. Compunix AIX undelete tool:
   http://www.compunix.com/prod/analyse.html
   http://www.compunix.com/eval/list.html

7. Check out a tool called "Lazarus", which can work in combination with unrm.

8. For Linux (ext2, ext3 fs) and Solaris (ufs fs), R-Tools technology, an undelete tool for Linux
   and Solaris:
   http://www.data-recovery-software.net/

9. Solaris undelete tools:

   -- Kernel Recovery for Solaris Sparc
      http://www.download.com/Kernel-Recovery-for-Solaris-Sparc/3000-2248_4-10578170.html
      http://www.download3k.com/Press-Launch-of-Kernel-Recovery-for-Solaris-SPARC.html
      http://www.tucows.com/preview/505583
      http://www.programurl.com/kernel-recovery-for-solaris-sparc.htm
      Nucleus Technologies.com: http://www.nucleustechnologies.com

   -- Other Solaris Data Recovery Software:
      http://solaris-data-recovery-software.qarchive.org/
      R-Tools technology, undelete tool for Linux and Solaris:
      http://www.data-recovery-software.net/

10. General info on undelete intentions on the ext2 fs:
    http://amadeus.uprm.edu/~undelete/Presentacion.html

11. Patents on an undelete feature in Unix (requires a change in how inodes are freed):
    http://www.patentstorm.us/patents/6615224.html
    http://www.freepatentsonline.com/6615224.html

###############################################################
4. OTHER STUFF:
###############################################################

----------------------------------------------------------------------------------------
Note 1:
----------------------------------------------------------------------------------------

Be careful in using "utilities" for removing accounts and other items. The following story explains
it all:

From: dbrillha@dave.mis.semi.harris.com (Dave Brillhart)
Organization: Harris Semiconductor

We can laugh (almost) about it now, but...

Our operations group, a VMS group but trying to learn UNIX, was assigned account administration.
They were cleaning up a few non-used accounts like they do on VMS - backup and purge. When they
came across the account "sccs", which had never been accessed, away it went. The "deleteuser"
utility from DEC asks if you would like to delete all the files in the account. Seems reasonable,
huh? Well, the home directory for "sccs" is "/". Enough said :-(

(Note: funny story, but file modes or permissions should actually make this impossible.)

----------------------------------------------------------------------------------------
Note 2:
----------------------------------------------------------------------------------------

You have already seen some examples of using the dd and od commands. These commands are available
on almost all unix versions. They are extremely powerful, and can also be very dangerous if not
used properly. Because you can dump any disk block, or blocks from tape, to any output, with
possible conversion of data, you might even recover data which would otherwise be considered lost.

The following article is very instructive on how to use the dd command:

http://www.codecoffee.com/tipsforlinux/articles/036.html

>> How and when to use the dd command?
In this article, Sam Chessman explains the use of the dd command with a lot of useful examples.
This article is not aimed at absolute beginners. Once you are familiar with the basics of Linux,
you will be in a better position to use the dd command.

The 'dd' command is one of the original Unix utilities and should be in everyone's tool box. It can
strip headers, extract parts of binary files and write into the middle of floppy disks; it is used
by the Linux kernel Makefiles to make boot images. It can be used to copy and convert magnetic tape
formats, convert between ASCII and EBCDIC, swap bytes, and force to upper and lower case.

For blocked I/O, the dd command has no competition in the standard tool set. One could write a
custom utility to do specific I/O or formatting but, as dd is already available almost everywhere,
it makes sense to use it.

Like most well-behaved commands, dd reads from its standard input and writes to its standard
output, unless a command line specification has been given. This allows dd to be used in pipes, and
remotely with the rsh remote shell command.

Unlike most commands, dd uses a keyword=value format for its parameters. This was reputedly modeled
after IBM System/360 JCL, which had an elaborate DD 'Dataset Definition' specification for I/O
devices. A complete listing of all keywords is available from GNU dd with:

   $ dd --help

Some people believe dd means "Destroy Disk" or "Delete Data", because if it is misused, a partition
or output file can be trashed very quickly. Since dd is the tool used to write disk headers, boot
records, and similar system data areas, misuse of dd has probably trashed many hard disks and file
systems.

In essence, dd copies and optionally converts data. It uses an input buffer, a conversion buffer if
conversion is specified, and an output buffer. Reads are issued to the input file or device for the
size of the input buffer, optional conversions are applied, and writes are issued for the size of
the output buffer. This allows I/O requests to be tailored to the requirements of a task. Output to
standard error reports the number of full and short blocks read and written.

Example 1

A typical task for dd is copying a floppy disk. As the common geometry of a 3.5" floppy is 18
sectors per track, two heads and 80 cylinders, an optimized dd command to read a floppy is:

Example 1-a : Copying from a 3.5" floppy

   dd bs=2x80x18b if=/dev/fd0 of=/tmp/floppy.image
   1+0 records in
   1+0 records out

The 18b specifies 18 sectors of 512 bytes, the 2x multiplies the sector size by the number of
heads, and the 80x is for the cylinders -- a total of 1474560 bytes. This issues a single
1474560-byte read request to /dev/fd0 and a single 1474560-byte write request to /tmp/floppy.image,
whereas the corresponding cp command

   cp /dev/fd0 /tmp/floppy.image

issues 360 reads and writes of 4096 bytes. While this may seem insignificant on a 1.44MB file, when
larger amounts of data are involved, reducing the number of system calls and improving performance
can be significant.

This example also shows the factor capability in the GNU dd number specification. This has been
around since before the Programmers Work Bench and, while not documented in the GNU dd man page, is
present in the source and works just fine, thank you.
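A simple sanity check after the read, not part of the original article: compare the image back
against the diskette before you re-use or eject it. cmp reads both the regular file and the raw
device (same device name as in the example above):

   cmp /tmp/floppy.image /dev/fd0 && echo "image matches the diskette"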
To finish copying a floppy, the original needs to be ejected, a new diskette inserted, and another
dd command issued to write to the diskette:

Example 1-b : Copying to a 3.5" floppy

   dd bs=2x80x18b < /tmp/floppy.image > /dev/fd0
   1+0 records in
   1+0 records out

Here the stdin/stdout usage is shown, in which respect dd is like most other utilities.

Example 2

The original need for dd came with the 1/2" tapes used to exchange data with other systems and to
boot and install Unix on the PDP/11. Those days are gone, but the 9-track format lives on. To
access the venerable 9-track, 1/2" tape, dd is superior. With modern SCSI tape devices, blocking
and unblocking are no longer a necessity, as the hardware reads and writes 512-byte data blocks.
However, the 9-track 1/2" tape format allows for variable length blocking and can be impossible to
read with the cp command. The dd command allows for the exact specification of input and output
block sizes, and can even read variable length block sizes, by specifying an input buffer size
larger than any of the blocks on the tape. Short blocks are read, and dd happily copies those to
the output file without complaint, simply reporting the number of complete and short blocks
encountered.

Then there are the EBCDIC datasets transferred from such systems as MVS, which are almost always
80-character blank-padded Hollerith Card Images! No problem for dd, which will convert these to
newline-terminated variable record length ASCII. Writing the format back out is just as easy, and
dd again is the right tool for the job.

Example 2 : Converting an EBCDIC 80-character fixed-length record to an ASCII variable-length
newline-terminated record

   dd bs=10240 cbs=80 conv=ascii,unblock if=/dev/st0 of=ascii.out
   40+0 records in
   38+1 records out

The fixed record length is specified by the cbs=80 parameter, and the input and output block sizes
are set with bs=10240. The EBCDIC-to-ASCII conversion and the fixed-to-variable record length
conversion are enabled with the conv=ascii,unblock parameter. Notice the output record count is
smaller than the input record count. This is due to the padding spaces eliminated from the output
file and replaced with newline characters.

Example 3

Sometimes data arrives from sources in unusual formats. For example, every time I read a tape made
on an SGI machine, the bytes are swapped. The dd command takes this in stride, swapping the bytes
as required. The ability to use dd in a pipe with rsh means that the tape device on any *nix system
is accessible, given the proper rlogin setup.

Example 3 : Byte Swapping with Remote Access of Magnetic Tape

   rsh sgi.with.tape dd bs=256b if=/dev/rmt0 conv=swab | tar xvf -

The dd runs on the SGI and swaps the bytes before writing to the tar command running on the local
host.

Example 4

Murphy's Law was postulated long before digital computers, but it seems it was specifically
targeted at them. When you need to read a floppy or tape, it is the only copy in the universe and
you have a deadline past due -- that is when you will have a bad spot on the magnetic media, and
your data will be unreadable. To the rescue comes dd, which can read all the good data around the
bad spot and continue after the error is encountered. Sometimes this is all that is needed to
recover the important data.

Example 4 : Error Handling

   dd bs=265b conv=noerror if=/dev/st0 of=/tmp/bad.tape.image
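A common refinement of Example 4, not mentioned in the article: adding sync to the conversion list
pads each short (error) read out to the full block size, so the data that does come off the tape
keeps its original offsets in the copy. Same device and block size as in the example above:

   dd bs=265b conv=noerror,sync if=/dev/st0 of=/tmp/bad.tape.image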
Example 5

The Linux kernel Makefiles use dd to build the boot image. In the Alpha Makefile
/usr/src/linux/arch/alpha/boot/Makefile, the srmboot target issues the command:

Example 5 : Kernel Image Makefile

   dd if=bootimage of=$(BOOTDEV) bs=512 seek=1 skip=1

This skips the first 512 bytes of the input bootimage file (skip=1) and writes starting at the
second sector of the $(BOOTDEV) device (seek=1). A typical use of dd is to skip executable headers
and begin writing in the middle of a device, skipping volume and partition data. As this can cause
your disk to lose file system data, please test and use these applications with care.

----------------------------------------------------------------------------------------
Note 3:
----------------------------------------------------------------------------------------

od Command

Purpose: Displays (dumps) files in octal and other formats.

Syntax (to display files using a type string to format the output):

   od [ -v ] [ -A AddressBase ] [ -N Count ] [ -j Skip ] [ -t TypeString ... ] [ File ... ]

TypeString is a string of one or more of the type indicator characters below. If you include more
than one type indicator character in a single type string, or use this option more than once, od
writes one copy of each output line using each of the data types that you specified, in the order
that you specified.

   a   named character
   c   ASCII character or backslash escape
   d   signed decimal
   f   floating point
   o   octal
   u   unsigned decimal
   x   hexadecimal

Size modifiers: C (char), S (short), I (int), L (long); for floating point (f): F (float),
D (double), L (long double).

Examples:

>> To display a file in octal, a page at a time, enter:

   od a.out | pg

This command displays the a.out file in octal format and pipes the output through the pg command.

>> To translate a file into several formats at once, enter:

   od -t cx a.out > a.xcd

This command writes the contents of the a.out file, in hexadecimal format (x) and character
format (c), into the a.xcd file.

>> To start displaying a file in the middle (using the first syntax format), enter:

   od -t acx -j 100 a.out

This command displays the a.out file in named character (a), character (c), and hexadecimal (x)
formats, starting from the 100th byte.

>> To start in the middle of a file (using the second syntax format), enter:

   od -bcx a.out +100.

This displays the a.out file in octal-byte (-b), character (-c), and hexadecimal (-x) formats,
starting from the 100th byte. The . (period) after the offset makes it a decimal number. Without
the period, the output would start from the 64th (100 octal) byte.

Some other ways to invoke od:

   % dir | od -c | more
   % cat my_file | od -c | more
   % od my_file | more

Comparison of different outputs:

>> Show the first 16 characters of a binary file (/bin/sh) as ASCII characters or backslash
   escapes (octal):

   % od -N 16 -c /bin/sh

   Output:
   0000000 177 E L F 001 001 001 \0 \0 \0 \0 \0 \0 \0 \0 \0

>> Show the same binary as named ASCII characters:

   % od -N 16 -a /bin/sh

   Output:
   0000000 del E L F soh soh soh nul nul nul nul nul nul nul nul nul

>> Show the same binary as hexadecimal bytes:

   % od -N 16 -t x1 /bin/sh

   Output:
   0000000 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00

>> Show the same binary as octal numbers:

   % od -N 16 /bin/sh

   Output:
   0000000 042577 043114 000401 000001 000000 000000 000000 000000
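Tying dd and od together, as Note 2 above suggests: dd can pull a single block off a raw device and
od can make it readable. A small sketch using the fragment number from the AIX fsdb walkthrough
earlier (a0 was 0x25d, i.e. fragment 605 of a filesystem with 4K fragments); the raw device name
/dev/rtestlv is taken from that same example and is otherwise just a placeholder:

   dd if=/dev/rtestlv bs=4k skip=605 count=1 2>/dev/null | od -t cx | more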