Release Notes for TrueZIP 6.5.4
(February 27th, 2007)
Introduction
This is a feature and update release. Upgrading is recommended for all users.
Support for OpenDocument Format (ODF)
Most notably, support for reading and writing OpenDocument Format (ODF) files has been added. Here's an example:
```java
// Recognize ODF text documents as created by OpenOffice Writer, e.g.:
File.setDefaultArchiveDetector(new DefaultArchiveDetector("odt"));
File document = new File("helloworld.odt");
File content = new File(document, "content.xml");
try {
    FileInputStream in = new FileInputStream(content);
    try {
        // Init DOM parser here...
    } finally {
        in.close(); // ALWAYS close the stream!
    }
} catch (IOException ex) {
    ex.printStackTrace(); // maybe document is a false positive
}
```
If you want to create or modify an ODF file, make sure to create or overwrite the manifest entry first in order to provide best performance. Otherwise, the ODF archive driver buffers all output in temporary files until either this entry is written or the archive file gets unmounted. This is because ODF files must always start with this entry.
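For example, writing an ODF document could then look like this (a sketch assuming the archive detector setup from the example above; the file name and the `manifestXml` byte array are hypothetical placeholders):

```java
File document = new File("helloworld.odt");
// Write the manifest entry FIRST, so that the ODF driver can stream all
// subsequent entries directly to the archive instead of buffering them
// in temporary files until the archive gets unmounted.
File manifest = new File(document, "META-INF/manifest.xml");
try {
    OutputStream out = new FileOutputStream(manifest);
    try {
        out.write(manifestXml); // hypothetical byte[] holding the manifest XML
    } finally {
        out.close(); // ALWAYS close the stream!
    }
    // Now write content.xml and any other entries...
} catch (IOException ex) {
    ex.printStackTrace();
}
```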
It is hoped that TrueZIP 7, alias TrueVFS, can enhance the support for ODF files so that encrypted ODF files can be read and written transparently, too (just like RAES encrypted ZIP files are today). For this to work, TrueVFS would need to take full control over the entry META-INF/manifest.xml and probably hide its existence from the client application.
Creation and Ordering of Archive Entries
Directory archive entries are now only created in the output archive file if they have been explicitly created with File.mkdir() or their last modification time has been modified with File.setLastModified(long). Otherwise, they will just exist as so-called ghost directories in the virtual file system. Just like before, ghost directories can also be read from the input archive file and return 0L upon a call to File.lastModified(). This mimics the behavior of most archive tools, which do not write directory entries at all. The concept of ghost directories is now documented in detail in the Javadoc for the package de.schlichtherle.io.
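Concretely, with the TrueZIP File API (a sketch; the archive and entry names are made up):

```java
File archive = new File("archive.zip");
try {
    // Explicitly created with mkdir(): this directory entry WILL be
    // written to the output archive file.
    new File(archive, "explicit").mkdir();
    // Created only implicitly as the parent of a file entry: "ghost"
    // remains a ghost directory and is NOT written as an entry of its own.
    new File(archive, "ghost/readme.txt").createNewFile();
    File.umount(); // update the archive file
} catch (IOException ex) {
    ex.printStackTrace();
}
```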
When reading and writing archive files, the order of their entries is now preserved due to a change from HashMap to LinkedHashMap in some classes. Note that directory entries are still written after all file entries, if at all (see above).
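The ordering guarantee itself is just that of the JDK's LinkedHashMap and can be demonstrated in isolation (the entry names below are made-up placeholders for archive entries):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class EntryOrderDemo {
    // Returns the entry names in the order they were added, which a
    // LinkedHashMap preserves (a plain HashMap does not guarantee this).
    static String keys() {
        Map<String, Long> entries = new LinkedHashMap<String, Long>();
        entries.put("mimetype", 0L);
        entries.put("META-INF/manifest.xml", 1L);
        entries.put("content.xml", 2L);
        StringBuilder sb = new StringBuilder();
        for (String name : entries.keySet()) {
            if (sb.length() > 0) sb.append(',');
            sb.append(name);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(keys()); // prints the names in insertion order
    }
}
```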
These changes imply that when copying archive files, the order of the entries in the destination now more closely resembles the order in the source and that missing directory entries in the source (called ghost directories) are no longer created in the destination.
Concurrency
The ZIP driver family now supports concurrent writing of archive entries, just like the TAR driver family did already. Previously, an OutputArchiveBusyException was thrown. Don't expect wonders, however: An archive file can still only be written as a single output stream. So only the archive entry output stream which was created first will write directly to the archive file output stream. All others will write to a temporary file, which is copied to the archive file output stream when the archive entry output stream is closed and no other streams are busy, or when the archive file gets unmounted (which closes all archive entry streams first).
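In code, the change looks like this (a sketch with made-up archive and entry names, assuming the default ZIP detector):

```java
File archive = new File("archive.zip");
try {
    OutputStream first = new FileOutputStream(new File(archive, "first.txt"));
    // Opening a second entry stream while the first one is still open used
    // to throw an OutputArchiveBusyException for the ZIP driver family.
    // Now the second stream transparently writes to a temporary file.
    OutputStream second = new FileOutputStream(new File(archive, "second.txt"));
    try {
        first.write("written directly to the archive".getBytes());
        second.write("buffered in a temporary file".getBytes());
    } finally {
        second.close(); // buffered data is copied once no stream is busy
        first.close();
    }
} catch (IOException ex) {
    ex.printStackTrace();
}
```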
Of course, thread safety is unaffected by these changes: TrueZIP's virtual file system operations were, and still are, virtually atomic. While not new, this concept is now documented in detail in the Javadoc for the package de.schlichtherle.io.
Miscellaneous
Great effort has been put into the Javadoc. There is also a new tutorial/manual on the web site which covers frequently asked questions. I hope you appreciate it.
As usual, some minor optimizations have been made, some bugs have been fixed, and the unit tests have been extended so that future releases are guarded against regressions of these fixes. TrueZIP 6.5 has been analyzed with FindBugs in order to eliminate potential bugs. All remaining issues have been verified to be false positives.
Starting from this release, the precompiled JAR contains full debugging information. This should make a developer's life a bit easier: While it's easy to use tools like ProGuard to strip the debugging information if you don't want it, you don't need to do the more complicated build from the sources anymore if you need it.
Legend

Term | Description
---|---
New | Introduces a new feature.
Fixed | Introduces a bug fix of an existing feature.
Enhanced | Introduces the enhancement of an existing feature. This update is fully backwards compatible.
Changed | Introduces the change of an existing feature. This update may cause backwards incompatibilities.
Deprecated | Introduces the deprecation of an existing feature.
List of Updates (Change Log)
The following is an overview of all updates in this release which affect the public API. Please note that internal refactorings are not listed. For a full list of updates, please refer to the CVS repository and diff against the tag TrueZIP-6_4.
Updates in the Package de.schlichtherle.crypto.io
- Fixed: CipherReadOnlyFile.read() returned a negative integer for bytes in the inclusive range 128 to 255.
Updates in the Package de.schlichtherle.io
- Fixed: File.deleteAll() bypassed the internal state associated with archive files.
- Fixed: The copy operations in the File class threw an IOException instead of a FileNotFoundException when a file archive entry was to be copied over a directory archive entry.
- Fixed: The package internal ArchiveControllers class now forces the key manager singleton to get loaded and instantiated at startup rather than on demand. With on-demand key manager instantiation, loading may have been deferred until the JVM shutdown hook is run. However, some environments (e.g. app servers) inhibit class loading in shutdown hooks.
- Fixed: ChainableIOException.initCause(Throwable) now throws a ClassCastException if the parameter is not an IOException.
- Changed: Newly created directory entries are now written to an archive file only if they have been created with File.mkdir() or their last modification time has been set with File.setLastModified(long). This mimics the behavior of most archive tools, which do not write directory entries.
- Changed: The method DefaultArchiveDetector.getArchiveDriver(String) now throws a RuntimeException if an archive driver class cannot get loaded or instantiated. Previously, a WARNING message was logged using java.util.logging and null was returned.
- Enhanced: When copying or renaming archive files, ghost directories in the source are now retained as ghost directories in the destination instead of being created as regular directories.
- Enhanced: When an archive file is automatically unmounted, the archive controller now waits for all entry streams to get closed instead of failing with an exception if any open entry streams are present. This is supported by the auto-close feature of entry streams, which closes an entry stream if it gets picked up by the garbage collector.
- Enhanced: The file system now enumerates all entries in order via LinkedHashMap. This causes the order of entries to be preserved when copying archives, with the exception that all file entries are written before their directory entries.
- Enhanced: Archive drivers are now created when a DefaultArchiveDetector is constructed rather than when getArchiveDriver(String) is called, in order to make the class fail early. This does not affect loading the global registry from configuration files on the class path. Another positive side effect is that a separate archive driver instance no longer needs to exist for every archive file suffix, which slightly reduces the memory footprint. However, archive drivers are still not singletons.
- Enhanced: Removed redundant call to System.runFinalization() in ArchiveControllers.umount().
- Enhanced: Setting the system property de.schlichtherle.io.strict to true now causes File.isLenient() to return false by default.
- Enhanced: InputArchiveMetaData and OutputArchiveMetaData no longer trigger garbage collection and finalization if File.isLenient() returns false.
- Enhanced: The system property de.schlichtherle.io.registry can now be set to a list of relative paths which are separated by path separators (';' on Windows, ':' on Unix). These relative paths are then searched for configuration files on the class path.
- Enhanced: The system property de.schlichtherle.io.default can now be set to the list of archive file suffixes recognized by default. This overrules the DEFAULT keyword found in configuration files.
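Taken together, the new system properties can be set on the command line, e.g. (a sketch; the main class com.example.MyApp and the registry paths are made up, and ':' is the Unix path separator):

```shell
java -Dde.schlichtherle.io.strict=true \
     -Dde.schlichtherle.io.registry="config/truezip:extra/truezip" \
     -Dde.schlichtherle.io.default="ear|jar|war|zip" \
     com.example.MyApp
```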
Updates in the Package de.schlichtherle.io.archive.spi
- Fixed: AbstractArchiveDriver did not initialize the thread local encoder when deserialized.
- Changed: Deprecated OutputArchive.storeDirectory(), since the same can be done with OutputArchive.getOutputStream() and a subsequent OutputStream.close() on the returned stream.
- New: The new class MultiplexedOutputArchive provides concurrent writing of multiple archive entries by decorating arbitrary OutputArchive implementations. This eases the task of implementing an archive driver and benefits the client application because it allows concurrent writing of archive entries by multiple threads. Note that the TAR driver family already provided this feature before via a custom implementation. In this version, the implementation has been refactored to use the new class.
Updates in the Package de.schlichtherle.io.archive.tar
- Fixed: TarInputStream did not recover all entry attributes properly. This resulted in unwanted changes of the attributes when a TAR file was updated.
- Fixed: If a TAR file contained multiple entries for the same name, some temporary files may have been left.
- Fixed: Entries created with new TarEntry(String) did not return -1 as their last modification time. This did not affect previous releases, but would have affected this release due to update #1 in the package de.schlichtherle.io.
- Fixed: The TAR driver left a temporary file if a corrupted TAR file was read.
- Fixed: The TAR driver now produces GNU tar compatible file headers for long file names instead of allowing the class TarOutputStream in the package org.apache.tools.tar from ant.jar to throw an (undocumented) RuntimeException.
Updates in the Package de.schlichtherle.io.archive.zip
- New: The classes OdfDriver and OdfOutputArchive support reading and writing of OpenDocument Format (ODF) files. For more information, please refer to the Javadoc for the driver class.
- Enhanced: The ZIP driver family now supports concurrent writing of multiple archive entries. See update #3 in the package de.schlichtherle.io.archive.spi.
Updates in the Package de.schlichtherle.io.rof
- Fixed: BufferedReadOnlyFile.read() returned a negative integer for bytes in the inclusive range 128 to 255.
- Fixed: ChannelReadOnlyFile.read() returned EOF after first use.
- Fixed: ChannelReadOnlyFile.seek(long) threw IllegalArgumentException instead of IOException on negative offset.
- Fixed: The class MemoryMappedReadOnlyFile did not work.
Updates in the Package de.schlichtherle.io.swing
- Fixed: An illegal call to FileSystemView.getSystemIcon(java.io.File) was made for a nonexistent file argument.
Updates in the Package de.schlichtherle.io.swing.tree
- Fixed: FileTreeModel.nodeRemoved(java.io.File) did not properly notify the tree of updates.
Updates in the Package de.schlichtherle.key
- Fixed: AbstractKeyProvider.getCreateKey() and AbstractKeyProvider.getOpenKey() might have returned null instead of throwing an UnknownKeyException.
- Changed: Moved the PromptingKeyManager.getKeyProvider(String) implementation to its abstract definition in the KeyManager base class and deprecated it. Client applications cannot be adversely affected.
Updates in the Package de.schlichtherle.util
- New: The class de.schlichtherle.util.CanonicalStringSet is a convenient and powerful means to operate with expressions such as "ear|jar|war|zip".
Updates in the Package de.schlichtherle.util.zip
- Fixed: The byte array for extra data in this class is now copied on any call to the getter or setter.
- Fixed: BasicZipFile.RawCheckedInputStream.read() did not provide a dummy byte to the inflater on EOF. This caused errors when assertions were enabled, but otherwise did not seem to have affected the CRC-32 checking.
- Fixed: BasicZipOutputStream.write(int) wasn't overridden in ZipOutputStream.
- Fixed: BasicZipFile.length() threw NullPointerException instead of an IOException if the file was close()d before.
- Enhanced: The package now supports language encoding according to Appendix D of PKWARE's ZIP File Format Specification. This implies that the BasicZipFile class ignores its constructor parameter for the encoding if bit 11 of the General Purpose Bit Flag is set in an archive file.
- Enhanced: BasicZipFile now enumerates its entries in the order of the Central Directory Records.
- Enhanced: The property setters in the class ZipEntry now accept UNKNOWN (-1) as a value for numeric properties.
- Enhanced: The constructor of BasicZipFile now fails if it finds an unsupported compression method in a Central Directory Record instead of waiting until getInputStream() is called.