Package Bio :: Module bgzf

Source Code for Module Bio.bgzf

#!/usr/bin/env python
# Copyright 2010-2015 by Peter Cock.
# All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license.  Please see the LICENSE file that should have been included
# as part of this package.
r"""Read and write BGZF compressed files (the GZIP variant used in BAM).

The SAM/BAM file format (Sequence Alignment/Map) comes in a plain text
format (SAM), and a compressed binary format (BAM). The latter uses a
modified form of gzip compression called BGZF (Blocked GNU Zip Format),
which can be applied to any file format to provide compression with
efficient random access. BGZF is described together with the SAM/BAM
file format at http://samtools.sourceforge.net/SAM1.pdf

Please read the text below about 'virtual offsets' before using BGZF
files for random access.


Aim of this module
------------------

The Python gzip library can be used to read BGZF files, since for
decompression they are just (specialised) gzip files. What this
module aims to facilitate is random access to BGZF files (using the
'virtual offset' idea), and writing BGZF files (which means using
suitably sized gzip blocks and writing the extra 'BC' field in the
gzip headers). As in the gzip library, the zlib library is used
internally.

In addition to being required for random access to and writing of
BAM files, the BGZF format can also be used on other sequential
data (in the sense of one record after another), such as most of
the sequence data formats supported in Bio.SeqIO (like FASTA,
FASTQ, GenBank, etc) or large MAF alignments.

The Bio.SeqIO indexing functions use this module to support BGZF files.


Technical Introduction to BGZF
------------------------------

The gzip file format allows multiple compressed blocks, each of which
could be a stand alone gzip file. As an interesting bonus, this means
you can use Unix "cat" to combine two gzip files into one by
concatenating them. Also, each block can have one of several compression
levels (including uncompressed, which actually takes up a little bit
more space due to the gzip header).

What the BAM designers realised was that while random access to data
stored in traditional gzip files was slow, breaking the file into
gzip blocks would allow fast random access to each block. To access
a particular piece of the decompressed data, you just need to know
which block it starts in (the offset of the gzip block start), and
how far into the (decompressed) contents of the block you need to
read.

One problem with this is finding the gzip block sizes efficiently.
You can do it with a standard gzip file, but it requires every block
to be decompressed -- and that would be rather slow. Additionally,
typical gzip files may use very large blocks.

All that differs in BGZF is that the compressed size of each gzip block
is limited to 2^16 bytes, and an extra 'BC' field in the gzip header
records this size. Traditional decompression tools can ignore this,
and unzip the file just like any other gzip file.

The point of this is you can look at the first BGZF block, find out
how big it is from this 'BC' header, and thus seek immediately to
the second block, and so on.
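
For example, the following sketch (illustrative only, not part of this
module) walks the block starts using just the 'BC' size field. It
assumes the fixed gzip header layout given in the SAM/BAM specification,
where the two byte 'BC' payload sits at bytes 16-17 of each block:

    import struct

    def block_starts(handle):
        # Yield the file offset of each BGZF block in turn, given a
        # handle opened in binary mode and positioned at offset zero.
        offset = 0
        while True:
            handle.seek(offset + 16)  # seek to the 'BC' payload
            payload = handle.read(2)
            if len(payload) < 2:
                break  # end of file
            yield offset
            # 'BC' records the total block size minus one, as a uint16
            offset += struct.unpack("<H", payload)[0] + 1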

The BAM indexing scheme records read positions using a 64 bit
'virtual offset', comprising coffset << 16 | uoffset, where coffset
is the file offset of the BGZF block containing the start of the read
(unsigned integer using up to 64-16 = 48 bits), and uoffset is the
offset within the (decompressed) block (unsigned 16 bit integer).
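
For example, a read starting 4 bytes into the decompressed contents of
the block whose compressed data begins at file offset 18239 (values
taken from the BAM example in the BgzfReader docstring below) has the
virtual offset:

    (18239 << 16) | 4 == 1195311108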

This limits you to BAM files where the last block starts within the
first 2^48 bytes of the file, or 256 terabytes, and the decompressed
size of each block is at most 2^16 bytes, or 64kb. Note that this
matches the BGZF 'BC' field size which limits the compressed size of
each block to 2^16 bytes, allowing for BAM files to use BGZF with no
gzip compression (useful for intermediate files in memory to reduce
CPU load).


Warning about namespaces
------------------------

It is considered a bad idea to use "from XXX import ``*``" in Python,
because it pollutes the namespace. This is a real issue with Bio.bgzf
(and the standard Python library gzip) because they contain a function
called open. For example, suppose you do this:

>>> from Bio.bgzf import *
>>> print(open.__module__)
Bio.bgzf

Or,

>>> from gzip import *
>>> print(open.__module__)
gzip

Notice that the open function has been replaced. You can "fix" this if you
need to by importing the built-in open function:

>>> try:
...     from __builtin__ import open # Python 2
... except ImportError:
...     from builtins import open # Python 3
...

However, what we recommend instead is to use the explicit namespace, e.g.

>>> from Bio import bgzf
>>> print(bgzf.open.__module__)
Bio.bgzf


Example
-------

This is an ordinary GenBank file compressed using BGZF, so it can
be decompressed using gzip,

>>> import gzip
>>> handle = gzip.open("GenBank/NC_000932.gb.bgz", "r")
>>> assert 0 == handle.tell()
>>> line = handle.readline()
>>> assert 80 == handle.tell()
>>> line = handle.readline()
>>> assert 143 == handle.tell()
>>> data = handle.read(70000)
>>> assert 70143 == handle.tell()
>>> handle.close()

We can also access the file using the BGZF reader - but pay
attention to the file offsets which will be explained below:

>>> handle = BgzfReader("GenBank/NC_000932.gb.bgz", "r")
>>> assert 0 == handle.tell()
>>> print(handle.readline().rstrip())
LOCUS       NC_000932             154478 bp    DNA     circular PLN 15-APR-2009
>>> assert 80 == handle.tell()
>>> print(handle.readline().rstrip())
DEFINITION  Arabidopsis thaliana chloroplast, complete genome.
>>> assert 143 == handle.tell()
>>> data = handle.read(70000)
>>> assert 987828735 == handle.tell()
>>> print(handle.readline().rstrip())
f="GeneID:844718"
>>> print(handle.readline().rstrip())
     CDS             complement(join(84337..84771,85454..85843))
>>> offset = handle.seek(make_virtual_offset(55074, 126))
>>> print(handle.readline().rstrip())
    68521 tatgtcattc gaaattgtat aaagacaact cctatttaat agagctattt gtgcaagtat
>>> handle.close()

Notice the handle's offsets look different when using the BGZF reader.
This brings us to the key point about BGZF, which is the block structure:

>>> handle = open("GenBank/NC_000932.gb.bgz", "rb")
>>> for values in BgzfBlocks(handle):
...     print("Raw start %i, raw length %i; data start %i, data length %i" % values)
Raw start 0, raw length 15073; data start 0, data length 65536
Raw start 15073, raw length 17857; data start 65536, data length 65536
Raw start 32930, raw length 22144; data start 131072, data length 65536
Raw start 55074, raw length 22230; data start 196608, data length 65536
Raw start 77304, raw length 14939; data start 262144, data length 43478
Raw start 92243, raw length 28; data start 305622, data length 0
>>> handle.close()

In this example the first four blocks are 'full' and hold 65536 bytes
of uncompressed data. The fifth block isn't full and holds 43478 bytes.
Finally there is a special empty sixth block which takes 28 bytes on
disk and serves as an 'end of file' (EOF) marker. If this is missing,
it is possible your BGZF file is incomplete.

By reading ahead 70,000 bytes we moved into the second BGZF block,
and at that point the BGZF virtual offsets start to look different
to a simple offset into the decompressed data as exposed by the gzip
library.

As an example, consider seeking to the decompressed position 196734.
Since 196734 = 65536 + 65536 + 65536 + 126 = 65536*3 + 126, this
is equivalent to jumping the first three blocks (which in this
specific example are all size 65536 after decompression - which
does not always hold) and starting at byte 126 of the fourth block
(after decompression). For BGZF, we need to know the fourth block's
offset of 55074 and the offset within the block of 126 to get the
BGZF virtual offset.

>>> print(55074 << 16 | 126)
3609329790
>>> print(bgzf.make_virtual_offset(55074, 126))
3609329790

Thus for this BGZF file, decompressed position 196734 corresponds
to the virtual offset 3609329790. However, another BGZF file with
different contents would have compressed more or less efficiently,
so the compressed blocks would be different sizes. What this means
is that the mapping between the uncompressed offset and the compressed
virtual offset depends on the BGZF file you are using.

If you are accessing a BGZF file via this module, just use the
handle.tell() method to note the virtual offset of a position you
may later want to return to using handle.seek().

The catch with BGZF virtual offsets is that while they can be compared
(which offset comes first in the file), you cannot safely subtract
them to get the size of the data between them, nor add/subtract
a relative offset.
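
For example (using the block table above), decompressed positions 126
and 196734 are 196608 bytes apart, but their virtual offsets are 126
and 3609329790 respectively - a difference of 3609329664, which is
just 55074 << 16 and says nothing about the amount of data between
the two positions.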

Of course you can parse this file with Bio.SeqIO using BgzfReader,
although there isn't any benefit over using gzip.open(...), unless
you want to index BGZF compressed sequence files:

>>> from Bio import SeqIO
>>> handle = BgzfReader("GenBank/NC_000932.gb.bgz")
>>> record = SeqIO.read(handle, "genbank")
>>> handle.close()
>>> print(record.id)
NC_000932.1

"""

from __future__ import print_function

import sys
import zlib
import struct

from Bio._py3k import _as_bytes, _as_string
from Bio._py3k import open as _open


# For Python 2 can just use: _bgzf_magic = '\x1f\x8b\x08\x04'
# but need to use bytes on Python 3
_bgzf_magic = b"\x1f\x8b\x08\x04"
_bgzf_header = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00\x42\x43\x02\x00"
_bgzf_eof = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00BC\x02\x00\x1b\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00"
_bytes_BC = b"BC"


def open(filename, mode="rb"):
    """Open a BGZF file for reading, writing or appending."""
    if "r" in mode.lower():
        return BgzfReader(filename, mode)
    elif "w" in mode.lower() or "a" in mode.lower():
        return BgzfWriter(filename, mode)
    else:
        raise ValueError("Bad mode %r" % mode)
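

# Example usage of this module's open function (a sketch; "example.bgz"
# and "output.bgz" are hypothetical filenames):
#
#     reader = open("example.bgz")        # mode "rb", returns a BgzfReader
#     writer = open("output.bgz", "wb")   # returns a BgzfWriter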


def make_virtual_offset(block_start_offset, within_block_offset):
    """Compute a BGZF virtual offset from block start and within block offsets.

    The BAM indexing scheme records read positions using a 64 bit
    'virtual offset', comprising in C terms:

    block_start_offset << 16 | within_block_offset

    Here block_start_offset is the file offset of the BGZF block
    start (unsigned integer using up to 64-16 = 48 bits), and
    within_block_offset is the offset within the (decompressed)
    block (unsigned 16 bit integer).

    >>> make_virtual_offset(0, 0)
    0
    >>> make_virtual_offset(0, 1)
    1
    >>> make_virtual_offset(0, 2**16 - 1)
    65535
    >>> make_virtual_offset(0, 2**16)
    Traceback (most recent call last):
    ...
    ValueError: Require 0 <= within_block_offset < 2**16, got 65536

    >>> 65536 == make_virtual_offset(1, 0)
    True
    >>> 65537 == make_virtual_offset(1, 1)
    True
    >>> 131071 == make_virtual_offset(1, 2**16 - 1)
    True

    >>> 6553600000 == make_virtual_offset(100000, 0)
    True
    >>> 6553600001 == make_virtual_offset(100000, 1)
    True
    >>> 6553600010 == make_virtual_offset(100000, 10)
    True

    >>> make_virtual_offset(2**48, 0)
    Traceback (most recent call last):
    ...
    ValueError: Require 0 <= block_start_offset < 2**48, got 281474976710656

    """
    if within_block_offset < 0 or within_block_offset >= 65536:
        raise ValueError("Require 0 <= within_block_offset < 2**16, got %i"
                         % within_block_offset)
    if block_start_offset < 0 or block_start_offset >= 281474976710656:
        raise ValueError("Require 0 <= block_start_offset < 2**48, got %i"
                         % block_start_offset)
    return (block_start_offset << 16) | within_block_offset


def split_virtual_offset(virtual_offset):
    """Divides a 64-bit BGZF virtual offset into block start & within block offsets.

    >>> (100000, 0) == split_virtual_offset(6553600000)
    True
    >>> (100000, 10) == split_virtual_offset(6553600010)
    True

    """
    start = virtual_offset >> 16
    return start, virtual_offset ^ (start << 16)
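

# Note that make_virtual_offset and split_virtual_offset are inverses.
# A round-trip sanity check (1195311108 is the example virtual offset
# used in the BgzfReader docstring below):
#
#     coffset, uoffset = split_virtual_offset(1195311108)
#     assert (coffset, uoffset) == (18239, 4)
#     assert make_virtual_offset(coffset, uoffset) == 1195311108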


def BgzfBlocks(handle):
    """Low level debugging function to inspect BGZF blocks.

    Expects a BGZF compressed file opened in binary read mode using
    the builtin open function. Do not use a handle from this bgzf
    module or the gzip module's open function which will decompress
    the file.

    Returns the block start offset (see virtual offsets), the block
    length (add these for the start of the next block), the start
    offset of the decompressed data, and the decompressed length of
    the block's contents (limited to 65536 in BGZF), as an iterator
    - one tuple per BGZF block.

    >>> try:
    ...     from __builtin__ import open # Python 2
    ... except ImportError:
    ...     from builtins import open # Python 3
    ...
    >>> handle = open("SamBam/ex1.bam", "rb")
    >>> for values in BgzfBlocks(handle):
    ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values)
    Raw start 0, raw length 18239; data start 0, data length 65536
    Raw start 18239, raw length 18223; data start 65536, data length 65536
    Raw start 36462, raw length 18017; data start 131072, data length 65536
    Raw start 54479, raw length 17342; data start 196608, data length 65536
    Raw start 71821, raw length 17715; data start 262144, data length 65536
    Raw start 89536, raw length 17728; data start 327680, data length 65536
    Raw start 107264, raw length 17292; data start 393216, data length 63398
    Raw start 124556, raw length 28; data start 456614, data length 0
    >>> handle.close()

    Indirectly we can tell this file came from an old version of
    samtools because all the blocks (except the final one and the
    dummy empty EOF marker block) are 65536 bytes. Later versions
    avoid splitting a read between two blocks, and give the header
    its own block (useful to speed up replacing the header). You
    can see this in ex1_refresh.bam created using samtools 0.1.18:

    samtools view -b ex1.bam > ex1_refresh.bam

    >>> handle = open("SamBam/ex1_refresh.bam", "rb")
    >>> for values in BgzfBlocks(handle):
    ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values)
    Raw start 0, raw length 53; data start 0, data length 38
    Raw start 53, raw length 18195; data start 38, data length 65434
    Raw start 18248, raw length 18190; data start 65472, data length 65409
    Raw start 36438, raw length 18004; data start 130881, data length 65483
    Raw start 54442, raw length 17353; data start 196364, data length 65519
    Raw start 71795, raw length 17708; data start 261883, data length 65411
    Raw start 89503, raw length 17709; data start 327294, data length 65466
    Raw start 107212, raw length 17390; data start 392760, data length 63854
    Raw start 124602, raw length 28; data start 456614, data length 0
    >>> handle.close()

    The above example has no embedded SAM header (thus the first block
    is very small at just 38 bytes of decompressed data), while the next
    example does (a larger block of 103 bytes). Notice that the rest of
    the blocks show the same sizes (they contain the same read data):

    >>> handle = open("SamBam/ex1_header.bam", "rb")
    >>> for values in BgzfBlocks(handle):
    ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values)
    Raw start 0, raw length 104; data start 0, data length 103
    Raw start 104, raw length 18195; data start 103, data length 65434
    Raw start 18299, raw length 18190; data start 65537, data length 65409
    Raw start 36489, raw length 18004; data start 130946, data length 65483
    Raw start 54493, raw length 17353; data start 196429, data length 65519
    Raw start 71846, raw length 17708; data start 261948, data length 65411
    Raw start 89554, raw length 17709; data start 327359, data length 65466
    Raw start 107263, raw length 17390; data start 392825, data length 63854
    Raw start 124653, raw length 28; data start 456679, data length 0
    >>> handle.close()

    """
    data_start = 0
    while True:
        start_offset = handle.tell()
        # This may raise StopIteration which is perfect here
        block_length, data = _load_bgzf_block(handle)
        data_len = len(data)
        yield start_offset, block_length, data_start, data_len
        data_start += data_len
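

# The (raw start, raw length, data start, data length) tuples from
# BgzfBlocks can be used to translate a decompressed offset into a BGZF
# virtual offset. The helper below is an illustrative sketch, not part
# of the public Biopython API:
def _example_offset_to_virtual(handle, uncompressed_offset):
    """Map a decompressed offset to a virtual offset (illustrative sketch).

    Expects a BGZF file opened in binary read mode with the builtin
    open function, exactly as for BgzfBlocks. Scans the block table
    from the start of the file, so this is only a demonstration.
    """
    for raw_start, raw_len, data_start, data_len in BgzfBlocks(handle):
        if data_start <= uncompressed_offset < data_start + data_len:
            return make_virtual_offset(raw_start,
                                       uncompressed_offset - data_start)
    raise ValueError("Offset %i is beyond the decompressed data"
                     % uncompressed_offset)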


def _load_bgzf_block(handle, text_mode=False):
    """Internal function to load the next BGZF block (PRIVATE)."""
    magic = handle.read(4)
    if not magic:
        # End of file
        raise StopIteration
    if magic != _bgzf_magic:
        raise ValueError(r"A BGZF (e.g. a BAM file) block should start with "
                         r"%r, not %r; handle.tell() now says %r"
                         % (_bgzf_magic, magic, handle.tell()))
    gzip_mod_time, gzip_extra_flags, gzip_os, extra_len = \
        struct.unpack("<LBBH", handle.read(8))

    block_size = None
    x_len = 0
    while x_len < extra_len:
        subfield_id = handle.read(2)
        subfield_len = struct.unpack("<H", handle.read(2))[0]  # uint16_t
        subfield_data = handle.read(subfield_len)
        x_len += subfield_len + 4
        if subfield_id == _bytes_BC:
            assert subfield_len == 2, "Wrong BC payload length"
            assert block_size is None, "Two BC subfields?"
            block_size = struct.unpack("<H", subfield_data)[0] + 1  # uint16_t
    assert x_len == extra_len, (x_len, extra_len)
    assert block_size is not None, "Missing BC, this isn't a BGZF file!"
    # Now comes the compressed data, CRC, and length of uncompressed data.
    deflate_size = block_size - 1 - extra_len - 19
    d = zlib.decompressobj(-15)  # Negative window size means no headers
    data = d.decompress(handle.read(deflate_size)) + d.flush()
    expected_crc = handle.read(4)
    expected_size = struct.unpack("<I", handle.read(4))[0]
    assert expected_size == len(data), \
        "Decompressed to %i, not %i" % (len(data), expected_size)
    # Should cope with a mix of Python platforms...
    crc = zlib.crc32(data)
    if crc < 0:
        crc = struct.pack("<i", crc)
    else:
        crc = struct.pack("<I", crc)
    assert expected_crc == crc, \
        "CRC is %s, not %s" % (crc, expected_crc)
    if text_mode:
        return block_size, _as_string(data)
    else:
        return block_size, data


class BgzfReader(object):
    r"""BGZF reader, acts like a read only handle but seek/tell differ.

    Let's use the BgzfBlocks function to have a peek at the BGZF blocks
    in an example BAM file,

    >>> try:
    ...     from __builtin__ import open # Python 2
    ... except ImportError:
    ...     from builtins import open # Python 3
    ...
    >>> handle = open("SamBam/ex1.bam", "rb")
    >>> for values in BgzfBlocks(handle):
    ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values)
    Raw start 0, raw length 18239; data start 0, data length 65536
    Raw start 18239, raw length 18223; data start 65536, data length 65536
    Raw start 36462, raw length 18017; data start 131072, data length 65536
    Raw start 54479, raw length 17342; data start 196608, data length 65536
    Raw start 71821, raw length 17715; data start 262144, data length 65536
    Raw start 89536, raw length 17728; data start 327680, data length 65536
    Raw start 107264, raw length 17292; data start 393216, data length 63398
    Raw start 124556, raw length 28; data start 456614, data length 0
    >>> handle.close()

    Now let's see how to use this block information to jump to
    specific parts of the decompressed BAM file:

    >>> handle = BgzfReader("SamBam/ex1.bam", "rb")
    >>> assert 0 == handle.tell()
    >>> magic = handle.read(4)
    >>> assert 4 == handle.tell()

    So far nothing so strange, we got the magic marker used at the
    start of a decompressed BAM file, and the handle position makes
    sense. Now however, let's jump to the end of this block and 4
    bytes into the next block by reading 65536 bytes,

    >>> data = handle.read(65536)
    >>> len(data)
    65536
    >>> assert 1195311108 == handle.tell()

    Expecting 4 + 65536 = 65540 were you? Well this is a BGZF 64-bit
    virtual offset, which means:

    >>> split_virtual_offset(1195311108)
    (18239, 4)

    You should spot 18239 as the start of the second BGZF block, while
    the 4 is the offset into this block. See also make_virtual_offset,

    >>> make_virtual_offset(18239, 4)
    1195311108

    Let's jump back to almost the start of the file,

    >>> make_virtual_offset(0, 2)
    2
    >>> handle.seek(2)
    2
    >>> handle.close()

    Note that you can use the max_cache argument to limit the number of
    BGZF blocks cached in memory. The default is 100, and since each
    block can be up to 64kb, the default cache could take up to 6MB of
    RAM. The cache is not important for reading through the file in one
    pass, but is important for improving performance of random access.
    """

    def __init__(self, filename=None, mode="r", fileobj=None, max_cache=100):
        # TODO - Assuming we can seek, check for 28 bytes EOF empty block
        # and if missing warn about possible truncation (as in samtools)?
        if max_cache < 1:
            raise ValueError("Use max_cache with a minimum of 1")
        # Must open the BGZF file in binary mode, but we may want to
        # treat the contents as either text or binary (unicode or
        # bytes under Python 3)
        if fileobj:
            assert filename is None
            handle = fileobj
            assert "b" in handle.mode.lower()
        else:
            if "w" in mode.lower() or "a" in mode.lower():
                raise ValueError("Must use read mode (default), not write "
                                 "or append mode")
            handle = _open(filename, "rb")
        self._text = "b" not in mode.lower()
        if self._text:
            self._newline = "\n"
        else:
            self._newline = b"\n"
        self._handle = handle
        self.max_cache = max_cache
        self._buffers = {}
        self._block_start_offset = None
        self._block_raw_length = None
        self._load_block(handle.tell())

    def _load_block(self, start_offset=None):
        if start_offset is None:
            # If the file is being read sequentially, then _handle.tell()
            # should be pointing at the start of the next block.
            # However, if seek has been used, we can't assume that.
            start_offset = self._block_start_offset + self._block_raw_length
        if start_offset == self._block_start_offset:
            self._within_block_offset = 0
            return
        elif start_offset in self._buffers:
            # Already in cache
            self._buffer, self._block_raw_length = self._buffers[start_offset]
            self._within_block_offset = 0
            self._block_start_offset = start_offset
            return
        # Must hit the disk... first check cache limits,
        while len(self._buffers) >= self.max_cache:
            # TODO - Implement LRU cache removal?
            self._buffers.popitem()
        # Now load the block
        handle = self._handle
        if start_offset is not None:
            handle.seek(start_offset)
        self._block_start_offset = handle.tell()
        try:
            block_size, self._buffer = _load_bgzf_block(handle, self._text)
        except StopIteration:
            # EOF
            block_size = 0
            if self._text:
                self._buffer = ""
            else:
                self._buffer = b""
        self._within_block_offset = 0
        self._block_raw_length = block_size
        # Finally save the block in our cache,
        self._buffers[self._block_start_offset] = self._buffer, block_size

    def tell(self):
        """Returns a 64-bit unsigned BGZF virtual offset."""
        if 0 < self._within_block_offset and \
                self._within_block_offset == len(self._buffer):
            # Special case where we're right at the end of a (non empty) block.
            # For non-maximal blocks could give two possible virtual offsets,
            # but for a maximal block can't use 65536 as the within block
            # offset. Therefore for consistency, use the next block and a
            # within block offset of zero.
            return (self._block_start_offset + self._block_raw_length) << 16
        else:
            # return make_virtual_offset(self._block_start_offset,
            #                            self._within_block_offset)
            # TODO - Include bounds checking as in make_virtual_offset?
            return (self._block_start_offset << 16) | self._within_block_offset

    def seek(self, virtual_offset):
        """Seek to a 64-bit unsigned BGZF virtual offset."""
        # Do this inline to avoid a function call,
        # start_offset, within_block = split_virtual_offset(virtual_offset)
        start_offset = virtual_offset >> 16
        within_block = virtual_offset ^ (start_offset << 16)
        if start_offset != self._block_start_offset:
            # Don't need to load the block if already there
            # (this avoids a function call since _load_block would do nothing)
            self._load_block(start_offset)
            assert start_offset == self._block_start_offset
        if within_block > len(self._buffer) \
                and not (within_block == 0 and len(self._buffer) == 0):
            raise ValueError("Within offset %i but block size only %i"
                             % (within_block, len(self._buffer)))
        self._within_block_offset = within_block
        # assert virtual_offset == self.tell(), \
        #     "Did seek to %i (%i, %i), but tell says %i (%i, %i)" \
        #     % (virtual_offset, start_offset, within_block,
        #        self.tell(), self._block_start_offset,
        #        self._within_block_offset)
        return virtual_offset

    def read(self, size=-1):
        if size < 0:
            raise NotImplementedError("Don't be greedy, that could be massive!")
        elif size == 0:
            if self._text:
                return ""
            else:
                return b""
        elif self._within_block_offset + size <= len(self._buffer):
            # This may leave us right at the end of a block
            # (lazy loading, don't load the next block unless we have to)
            data = self._buffer[self._within_block_offset:
                                self._within_block_offset + size]
            self._within_block_offset += size
            assert data  # Must be at least 1 byte
            return data
        else:
            data = self._buffer[self._within_block_offset:]
            size -= len(data)
            self._load_block()  # will reset offsets
            # TODO - Test with corner case of an empty block followed by
            # a non-empty block
            if not self._buffer:
                return data  # EOF
            elif size:
                # TODO - Avoid recursion
                return data + self.read(size)
            else:
                # Only needed the end of the last block
                return data

    def readline(self):
        i = self._buffer.find(self._newline, self._within_block_offset)
        # Three cases to consider,
        if i == -1:
            # No newline, need to read in more data
            data = self._buffer[self._within_block_offset:]
            self._load_block()  # will reset offsets
            if not self._buffer:
                return data  # EOF
            else:
                # TODO - Avoid recursion
                return data + self.readline()
        elif i + 1 == len(self._buffer):
            # Found new line, but right at end of block (SPECIAL)
            data = self._buffer[self._within_block_offset:]
            # Must now load the next block to ensure tell() works
            self._load_block()  # will reset offsets
            assert data
            return data
        else:
            # Found new line, not at end of block (easy case, no IO)
            data = self._buffer[self._within_block_offset:i + 1]
            self._within_block_offset = i + 1
            # assert data.endswith(self._newline)
            return data

    def __next__(self):
        line = self.readline()
        if not line:
            raise StopIteration
        return line

    if sys.version_info[0] < 3:
        def next(self):
            """Python 2 style alias for Python 3 style __next__ method."""
            return self.__next__()
692 - def __iter__(self):
693 return self
694
695 - def close(self):
696 self._handle.close() 697 self._buffer = None 698 self._block_start_offset = None 699 self._buffers = None
700
701 - def seekable(self):
702 return True
703
704 - def isatty(self):
705 return False
706
707 - def fileno(self):
708 return self._handle.fileno()
709
710 - def __enter__(self):
711 return self
712
713 - def __exit__(self, type, value, traceback):
714 self.close()
715 716
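

# BgzfReader supports the context manager protocol, so it can be used
# in a with-statement. For example (a sketch; "example.gb.bgz" is a
# hypothetical filename):
#
#     with BgzfReader("example.gb.bgz") as handle:
#         voffset = handle.tell()   # note a position as a virtual offset
#         line = handle.readline()
#         handle.seek(voffset)      # jump back to it later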


class BgzfWriter(object):

    def __init__(self, filename=None, mode="w", fileobj=None, compresslevel=6):
        if fileobj:
            assert filename is None
            handle = fileobj
        else:
            if "w" not in mode.lower() and "a" not in mode.lower():
                raise ValueError("Must use write or append mode, not %r" % mode)
            if "a" in mode.lower():
                handle = _open(filename, "ab")
            else:
                handle = _open(filename, "wb")
        self._text = "b" not in mode.lower()
        self._handle = handle
        self._buffer = b""
        self.compresslevel = compresslevel

    def _write_block(self, block):
        # print("Saving %i bytes" % len(block))
        start_offset = self._handle.tell()
        assert len(block) <= 65536
        # Giving a negative window bits means no gzip/zlib headers,
        # -15 used in samtools
        c = zlib.compressobj(self.compresslevel,
                             zlib.DEFLATED,
                             -15,
                             zlib.DEF_MEM_LEVEL,
                             0)
        compressed = c.compress(block) + c.flush()
        del c
        assert len(compressed) < 65536, \
            "TODO - Didn't compress enough, try less data in this block"
        # BSIZE is the total block size minus one; the fixed parts of the
        # block (header, extra field, CRC and length trailer) total 26
        # bytes, so BSIZE = len(compressed) + 26 - 1 = len(compressed) + 25
        bsize = struct.pack("<H", len(compressed) + 25)
        # Masking makes the CRC unsigned; zlib.crc32 can return a signed
        # value on some Python platforms
        crc = struct.pack("<I", zlib.crc32(block) & 0xffffffff)
        uncompressed_length = struct.pack("<I", len(block))
        # Fixed 16 bytes,
        # gzip magic bytes (4) mod time (4),
        # gzip flag (1), os (1), extra length which is six (2),
        # sub field which is BC (2), sub field length of two (2),
        # Variable data,
        # 2 bytes: block length as BC sub field (2)
        # X bytes: the data
        # 8 bytes: crc (4), uncompressed data length (4)
        data = _bgzf_header + bsize + compressed + crc + uncompressed_length
        self._handle.write(data)

    def write(self, data):
        # TODO - Check bytes vs unicode
        data = _as_bytes(data)
        # block_size = 2**16 = 65536
        data_len = len(data)
        if len(self._buffer) + data_len < 65536:
            # print("Cached %r" % data)
            self._buffer += data
            return
        else:
            # print("Got %r, writing out some data..." % data)
            self._buffer += data
            while len(self._buffer) >= 65536:
                self._write_block(self._buffer[:65536])
                self._buffer = self._buffer[65536:]

    def flush(self):
        while len(self._buffer) >= 65536:
            self._write_block(self._buffer[:65535])
            self._buffer = self._buffer[65535:]
        self._write_block(self._buffer)
        self._buffer = b""
        self._handle.flush()

    def close(self):
        """Flush data, write 28 bytes empty BGZF EOF marker, and close the BGZF file."""
        if self._buffer:
            self.flush()
        # samtools will look for a magic EOF marker, just a 28 byte empty BGZF
        # block, and if it is missing warns the BAM file may be truncated. In
        # addition to samtools writing this block, so too does bgzip - so we
        # should too.
        self._handle.write(_bgzf_eof)
        self._handle.flush()
        self._handle.close()

    def tell(self):
        """Returns a BGZF 64-bit virtual offset."""
        return make_virtual_offset(self._handle.tell(), len(self._buffer))

    def seekable(self):
        # Not seekable, but we do support tell...
        return False

    def isatty(self):
        return False

    def fileno(self):
        return self._handle.fileno()

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.close()
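

# BgzfWriter also works as a context manager. For example (a sketch;
# "example.fastq" and "example.fastq.bgz" are hypothetical filenames,
# and _open is this module's alias for the builtin open function):
#
#     with _open("example.fastq", "rb") as src:
#         with BgzfWriter("example.fastq.bgz", "wb") as dst:
#             dst.write(src.read())
#
# Closing the writer (or leaving the with-block) flushes the buffer and
# appends the 28 byte empty BGZF EOF marker block.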


if __name__ == "__main__":
    if len(sys.argv) > 1:
        print("Call this with no arguments and pipe uncompressed data in on stdin")
        print("and it will produce BGZF compressed data on stdout. e.g.")
        print("")
        print("./bgzf.py < example.fastq > example.fastq.bgz")
        print("")
        print("The extension convention of *.bgz is to distinguish these from *.gz")
        print("used for standard gzipped files without the block structure of BGZF.")
        print("You can use the standard gunzip command to decompress BGZF files,")
        print("if it complains about the extension try something like this:")
        print("")
        print("cat example.fastq.bgz | gunzip > example.fastq")
        print("")
        print("See also the tool bgzip that comes with samtools")
        sys.exit(0)

    sys.stderr.write("Producing BGZF output from stdin...\n")
    w = BgzfWriter(fileobj=sys.stdout)
    while True:
        data = sys.stdin.read(65536)
        w.write(data)
        if not data:
            break
    # Doing close will write an empty BGZF block as the EOF marker:
    w.close()
    sys.stderr.write("BGZF data produced\n")