Source Code for Module Bio.bgzf

  1  #!/usr/bin/env python 
  2  # Copyright 2010-2015 by Peter Cock. 
  3  # All rights reserved. 
  4  # This code is part of the Biopython distribution and governed by its 
  5  # license.  Please see the LICENSE file that should have been included 
  6  # as part of this package. 
  7  r"""Read and write BGZF compressed files (the GZIP variant used in BAM). 
  8   
  9  The SAM/BAM file format (Sequence Alignment/Map) comes in a plain text 
 10  format (SAM), and a compressed binary format (BAM). The latter uses a 
 11  modified form of gzip compression called BGZF (Blocked GNU Zip Format), 
 12  which can be applied to any file format to provide compression with 
 13  efficient random access. BGZF is described together with the SAM/BAM 
 14  file format at http://samtools.sourceforge.net/SAM1.pdf 
 15   
 16  Please read the text below about 'virtual offsets' before using BGZF 
 17  files for random access. 
 18   
 19   
 20  Aim of this module 
 21  ------------------ 
 22   
 23  The Python gzip library can be used to read BGZF files, since for 
 24  decompression they are just (specialised) gzip files. What this 
 25  module aims to facilitate is random access to BGZF files (using the 
 26  'virtual offset' idea), and writing BGZF files (which means using 
 27  suitably sized gzip blocks and writing the extra 'BC' field in the 
 28  gzip headers). As in the gzip library, the zlib library is used 
 29  internally. 
 30   
 31  In addition to being required for random access to and writing of 
 32  BAM files, the BGZF format can also be used on other sequential 
 33  data (in the sense of one record after another), such as most of 
 34  the sequence data formats supported in Bio.SeqIO (like FASTA, 
 35  FASTQ, GenBank, etc) or large MAF alignments. 
 36   
 37  The Bio.SeqIO indexing functions use this module to support BGZF files. 
 38   
 39   
 40  Technical Introduction to BGZF 
 41  ------------------------------ 
 42   
 43  The gzip file format allows multiple compressed blocks, each of which 
 44  could be a stand alone gzip file. As an interesting bonus, this means 
 45  you can use Unix "cat" to combine two gzip files into one by 
 46  concatenating them. Also, each block can have one of several compression 
 47  levels (including uncompressed, which actually takes up a little bit 
 48  more space due to the gzip header). 
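As a quick sketch of that concatenation trick using only the standard library (not part of this module):

```python
import gzip

# Compress two chunks as separate, complete gzip streams.
part1 = gzip.compress(b"Hello ")
part2 = gzip.compress(b"world!")

# Concatenating the two streams byte-for-byte (as Unix "cat" would)
# still gives valid gzip data that decompresses as one message.
combined = part1 + part2
assert gzip.decompress(combined) == b"Hello world!"
```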
 49   
 50  What the BAM designers realised was that while random access to data 
 51  stored in traditional gzip files was slow, breaking the file into 
 52  gzip blocks would allow fast random access to each block. To access 
 53  a particular piece of the decompressed data, you just need to know 
 54  which block it starts in (the offset of the gzip block start), and 
 55  how far into the (decompressed) contents of the block you need to 
 56  read. 
 57   
 58  One problem with this is finding the gzip block sizes efficiently. 
 59  You can do it with a standard gzip file, but it requires every block 
 60  to be decompressed -- and that would be rather slow. Additionally 
 61  typical gzip files may use very large blocks. 
 62   
 63  All that differs in BGZF is that the compressed size of each gzip block 
 64  is limited to 2^16 bytes, and an extra 'BC' field in the gzip header 
 65  records this size. Traditional decompression tools can ignore this, 
 66  and unzip the file just like any other gzip file. 
 67   
 68  The point of this is that you can look at the first BGZF block, find out 
 69  how big it is from this 'BC' header, and thus seek immediately to 
 70  the second block, and so on. 
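As a sketch of that header trick, here is the standard 28 byte empty BGZF block (the EOF marker defined later in this module as _bgzf_eof) parsed by hand; the byte offsets assume the fixed BGZF header layout described above:

```python
import struct

# The standard 28 byte empty BGZF block used as an EOF marker.
block = (b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00BC"
         b"\x02\x00\x1b\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00")
assert block[:4] == b"\x1f\x8b\x08\x04"  # gzip magic, with FEXTRA flag set
# Bytes 16-17 are the BC subfield's uint16 payload: total block size minus one.
bsize = struct.unpack("<H", block[16:18])[0] + 1
assert bsize == 28 == len(block)
# To hop to the next block, seek bsize bytes past this block's start.
```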
 71   
 72  The BAM indexing scheme records read positions using a 64 bit 
 73  'virtual offset', comprising coffset << 16 | uoffset, where coffset 
 74  is the file offset of the BGZF block containing the start of the read 
 75  (unsigned integer using up to 64-16 = 48 bits), and uoffset is the 
 76  offset within the (decompressed) block (unsigned 16 bit integer). 
 77   
 78  This limits you to BAM files where the last block starts within the 
 79  first 2^48 bytes, or 256 terabytes, and the decompressed size of each block 
 80  is at most 2^16 bytes, or 64kb. Note that this matches the BGZF 
 81  'BC' field size which limits the compressed size of each block to 
 82  2^16 bytes, allowing for BAM files to use BGZF with no gzip 
 83  compression (useful for intermediate files in memory to reduce 
 84  CPU load). 
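The virtual offset packing described above is plain bit arithmetic, as this sketch shows (the example numbers are only illustrative):

```python
coffset = 55074   # file offset of the BGZF block's start
uoffset = 126     # offset within the decompressed block
virtual_offset = (coffset << 16) | uoffset
assert virtual_offset == 3609329790
# Splitting is the reverse: top 48 bits and bottom 16 bits.
assert virtual_offset >> 16 == coffset
assert virtual_offset & 0xFFFF == uoffset
```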
 85   
 86   
 87  Warning about namespaces 
 88  ------------------------ 
 89   
 90  It is considered a bad idea to use "from XXX import *" in Python, because 
 91  it pollutes the namespace. This is a real issue with Bio.bgzf (and the 
 92  standard Python library gzip) because they each contain a function called 
 93  open. For example, suppose you do this: 
 94   
 95  >>> from Bio.bgzf import * 
 96  >>> print(open.__module__) 
 97  Bio.bgzf 
 98   
 99  Or, 
100   
101  >>> from gzip import * 
102  >>> print(open.__module__) 
103  gzip 
104   
105  Notice that the open function has been replaced. You can "fix" this if you 
106  need to by importing the built-in open function: 
107   
108  >>> try: 
109  ...     from __builtin__ import open # Python 2 
110  ... except ImportError: 
111  ...     from builtins import open # Python 3 
112  ... 
113   
114  However, what we recommend instead is to use the explicit namespace, e.g. 
115   
116  >>> from Bio import bgzf 
117  >>> print(bgzf.open.__module__) 
118  Bio.bgzf 
119   
120   
121  Example 
122  ------- 
123   
124  This is an ordinary GenBank file compressed using BGZF, so it can 
125  be decompressed using gzip, 
126   
127  >>> import gzip 
128  >>> handle = gzip.open("GenBank/NC_000932.gb.bgz", "r") 
129  >>> assert 0 == handle.tell() 
130  >>> line = handle.readline() 
131  >>> assert 80 == handle.tell() 
132  >>> line = handle.readline() 
133  >>> assert 143 == handle.tell() 
134  >>> data = handle.read(70000) 
135  >>> assert 70143 == handle.tell() 
136  >>> handle.close() 
137   
138  We can also access the file using the BGZF reader - but pay 
139  attention to the file offsets which will be explained below: 
140   
141  >>> handle = BgzfReader("GenBank/NC_000932.gb.bgz", "r") 
142  >>> assert 0 == handle.tell() 
143  >>> print(handle.readline().rstrip()) 
144  LOCUS       NC_000932             154478 bp    DNA     circular PLN 15-APR-2009 
145  >>> assert 80 == handle.tell() 
146  >>> print(handle.readline().rstrip()) 
147  DEFINITION  Arabidopsis thaliana chloroplast, complete genome. 
148  >>> assert 143 == handle.tell() 
149  >>> data = handle.read(70000) 
150  >>> assert 987828735 == handle.tell() 
151  >>> print(handle.readline().rstrip()) 
152  f="GeneID:844718" 
153  >>> print(handle.readline().rstrip()) 
154       CDS             complement(join(84337..84771,85454..85843)) 
155  >>> offset = handle.seek(make_virtual_offset(55074, 126)) 
156  >>> print(handle.readline().rstrip()) 
157      68521 tatgtcattc gaaattgtat aaagacaact cctatttaat agagctattt gtgcaagtat 
158  >>> handle.close() 
159   
160  Notice that the handle's offsets look different for a BGZF file. This 
161  brings us to the key point about BGZF, which is the block structure: 
162   
163  >>> handle = open("GenBank/NC_000932.gb.bgz", "rb") 
164  >>> for values in BgzfBlocks(handle): 
165  ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values) 
166  Raw start 0, raw length 15073; data start 0, data length 65536 
167  Raw start 15073, raw length 17857; data start 65536, data length 65536 
168  Raw start 32930, raw length 22144; data start 131072, data length 65536 
169  Raw start 55074, raw length 22230; data start 196608, data length 65536 
170  Raw start 77304, raw length 14939; data start 262144, data length 43478 
171  Raw start 92243, raw length 28; data start 305622, data length 0 
172  >>> handle.close() 
173   
174  In this example the first four blocks are 'full' and hold 65536 bytes 
175  of uncompressed data. The fifth block isn't full and holds 43478 bytes. 
176  Finally there is a special empty sixth block which takes 28 bytes on 
177  disk and serves as an 'end of file' (EOF) marker. If this is missing, 
178  it is possible your BGZF file is incomplete. 
179   
180  By reading ahead 70,000 bytes we moved into the second BGZF block, 
181  and at that point the BGZF virtual offsets start to look different 
182  to a simple offset into the decompressed data as exposed by the gzip 
183  library. 
184   
185  As an example, consider seeking to the decompressed position 196734. 
186  Since 196734 = 65536 + 65536 + 65536 + 126 = 65536*3 + 126, this 
187  is equivalent to jumping the first three blocks (which in this 
188  specific example are all size 65536 after decompression - which 
189  does not always hold) and starting at byte 126 of the fourth block 
190  (after decompression). For BGZF, we need to know the fourth block's 
191  offset of 55074 and the offset within the block of 126 to get the 
192  BGZF virtual offset. 
193   
194  >>> print(55074 << 16 | 126) 
195  3609329790 
196  >>> print(bgzf.make_virtual_offset(55074, 126)) 
197  3609329790 
198   
199  Thus for this BGZF file, decompressed position 196734 corresponds 
200  to the virtual offset 3609329790. However, another BGZF file with 
201  different contents would have compressed more or less efficiently, 
202  so the compressed blocks would be different sizes. What this means 
203  is the mapping between the uncompressed offset and the compressed 
204  virtual offset depends on the BGZF file you are using. 
205   
206  If you are accessing a BGZF file via this module, just use the 
207  handle.tell() method to note the virtual offset of a position you 
208  may later want to return to using handle.seek(). 
209   
210  The catch with BGZF virtual offsets is that while they can be compared 
211  (which offset comes first in the file), you cannot safely subtract 
212  them to get the size of the data between them, nor add/subtract 
213  a relative offset. 
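As a sketch of why, consider two adjacent decompressed bytes that straddle a block boundary (the block offsets here are only illustrative):

```python
# Virtual offset of the last byte in a first block holding 65536 bytes,
# and of the first byte of the next block starting at raw offset 18239.
a = (0 << 16) | 65535
b = (18239 << 16) | 0
# The two positions are one decompressed byte apart, yet subtracting the
# virtual offsets gives a meaningless large number:
assert b - a == 1195245569
```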
214   
215  Of course you can parse this file with Bio.SeqIO using BgzfReader, 
216  although there isn't any benefit over using gzip.open(...), unless 
217  you want to index BGZF compressed sequence files: 
218   
219  >>> from Bio import SeqIO 
220  >>> handle = BgzfReader("GenBank/NC_000932.gb.bgz") 
221  >>> record = SeqIO.read(handle, "genbank") 
222  >>> handle.close() 
223  >>> print(record.id) 
224  NC_000932.1 
225   
226  """ 
227   
228  from __future__ import print_function 
229   
230  import sys 
231  import zlib 
232  import struct 
233   
234  from Bio._py3k import _as_bytes, _as_string 
235  from Bio._py3k import open as _open 
236   
237   
238  # For Python 2 can just use: _bgzf_magic = '\x1f\x8b\x08\x04' 
239  # but need to use bytes on Python 3 
240  _bgzf_magic = b"\x1f\x8b\x08\x04" 
241  _bgzf_header = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00\x42\x43\x02\x00" 
242  _bgzf_eof = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00BC\x02\x00\x1b\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00" 
243  _bytes_BC = b"BC" 
244   
245   
246  def open(filename, mode="rb"):
247      """Open a BGZF file for reading, writing or appending."""
248      if "r" in mode.lower():
249          return BgzfReader(filename, mode)
250      elif "w" in mode.lower() or "a" in mode.lower():
251          return BgzfWriter(filename, mode)
252      else:
253          raise ValueError("Bad mode %r" % mode)
254
255
256  def make_virtual_offset(block_start_offset, within_block_offset):
257      """Compute a BGZF virtual offset from block start and within block offsets.
258
259      The BAM indexing scheme records read positions using a 64 bit
260      'virtual offset', comprising in C terms:
261
262      block_start_offset << 16 | within_block_offset
263
264      Here block_start_offset is the file offset of the BGZF block
265      start (unsigned integer using up to 64-16 = 48 bits), and
266      within_block_offset is the offset within the (decompressed)
267      block (unsigned 16 bit integer).
268
269      >>> make_virtual_offset(0, 0)
270      0
271      >>> make_virtual_offset(0, 1)
272      1
273      >>> make_virtual_offset(0, 2**16 - 1)
274      65535
275      >>> make_virtual_offset(0, 2**16)
276      Traceback (most recent call last):
277      ...
278      ValueError: Require 0 <= within_block_offset < 2**16, got 65536
279
280      >>> 65536 == make_virtual_offset(1, 0)
281      True
282      >>> 65537 == make_virtual_offset(1, 1)
283      True
284      >>> 131071 == make_virtual_offset(1, 2**16 - 1)
285      True
286
287      >>> 6553600000 == make_virtual_offset(100000, 0)
288      True
289      >>> 6553600001 == make_virtual_offset(100000, 1)
290      True
291      >>> 6553600010 == make_virtual_offset(100000, 10)
292      True
293
294      >>> make_virtual_offset(2**48, 0)
295      Traceback (most recent call last):
296      ...
297      ValueError: Require 0 <= block_start_offset < 2**48, got 281474976710656
298
299      """
300      if within_block_offset < 0 or within_block_offset >= 65536:
301          raise ValueError("Require 0 <= within_block_offset < 2**16, got %i" %
302                           within_block_offset)
303      if block_start_offset < 0 or block_start_offset >= 281474976710656:
304          raise ValueError("Require 0 <= block_start_offset < 2**48, got %i" %
305                           block_start_offset)
306      return (block_start_offset << 16) | within_block_offset
307
308
309  def split_virtual_offset(virtual_offset):
310      """Divides a 64-bit BGZF virtual offset into block start & within block offsets.
311
312      >>> (100000, 0) == split_virtual_offset(6553600000)
313      True
314      >>> (100000, 10) == split_virtual_offset(6553600010)
315      True
316
317      """
318      start = virtual_offset >> 16
319      return start, virtual_offset ^ (start << 16)
320
321
322  def BgzfBlocks(handle):
323      """Low level debugging function to inspect BGZF blocks.
324
325      Expects a BGZF compressed file opened in binary read mode using
326      the builtin open function. Do not use a handle from this bgzf
327      module or the gzip module's open function which will decompress
328      the file.
329
330      Returns the block start offset (see virtual offsets), the block
331      length (add these for the start of the next block), and the
332      decompressed length of the block's contents (limited to 65536 in
333      BGZF), as an iterator - one tuple per BGZF block.
334
335      >>> try:
336      ...     from __builtin__ import open  # Python 2
337      ... except ImportError:
338      ...     from builtins import open  # Python 3
339      ...
340      >>> handle = open("SamBam/ex1.bam", "rb")
341      >>> for values in BgzfBlocks(handle):
342      ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values)
343      Raw start 0, raw length 18239; data start 0, data length 65536
344      Raw start 18239, raw length 18223; data start 65536, data length 65536
345      Raw start 36462, raw length 18017; data start 131072, data length 65536
346      Raw start 54479, raw length 17342; data start 196608, data length 65536
347      Raw start 71821, raw length 17715; data start 262144, data length 65536
348      Raw start 89536, raw length 17728; data start 327680, data length 65536
349      Raw start 107264, raw length 17292; data start 393216, data length 63398
350      Raw start 124556, raw length 28; data start 456614, data length 0
351      >>> handle.close()
352
353      Indirectly we can tell this file came from an old version of
354      samtools because all the blocks (except the final one and the
355      dummy empty EOF marker block) are 65536 bytes. Later versions
356      avoid splitting a read between two blocks, and give the header
357      its own block (useful to speed up replacing the header). You
358      can see this in ex1_refresh.bam created using samtools 0.1.18:
359
360      samtools view -b ex1.bam > ex1_refresh.bam
361
362      >>> handle = open("SamBam/ex1_refresh.bam", "rb")
363      >>> for values in BgzfBlocks(handle):
364      ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values)
365      Raw start 0, raw length 53; data start 0, data length 38
366      Raw start 53, raw length 18195; data start 38, data length 65434
367      Raw start 18248, raw length 18190; data start 65472, data length 65409
368      Raw start 36438, raw length 18004; data start 130881, data length 65483
369      Raw start 54442, raw length 17353; data start 196364, data length 65519
370      Raw start 71795, raw length 17708; data start 261883, data length 65411
371      Raw start 89503, raw length 17709; data start 327294, data length 65466
372      Raw start 107212, raw length 17390; data start 392760, data length 63854
373      Raw start 124602, raw length 28; data start 456614, data length 0
374      >>> handle.close()
375
376      The above example has no embedded SAM header (thus the first block
377      is very small at just 38 bytes of decompressed data), while the next
378      example does (a larger block of 103 bytes). Notice that the rest of
379      the blocks show the same sizes (they contain the same read data):
380
381      >>> handle = open("SamBam/ex1_header.bam", "rb")
382      >>> for values in BgzfBlocks(handle):
383      ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values)
384      Raw start 0, raw length 104; data start 0, data length 103
385      Raw start 104, raw length 18195; data start 103, data length 65434
386      Raw start 18299, raw length 18190; data start 65537, data length 65409
387      Raw start 36489, raw length 18004; data start 130946, data length 65483
388      Raw start 54493, raw length 17353; data start 196429, data length 65519
389      Raw start 71846, raw length 17708; data start 261948, data length 65411
390      Raw start 89554, raw length 17709; data start 327359, data length 65466
391      Raw start 107263, raw length 17390; data start 392825, data length 63854
392      Raw start 124653, raw length 28; data start 456679, data length 0
393      >>> handle.close()
394
395      """
396      data_start = 0
397      while True:
398          start_offset = handle.tell()
399          # This may raise StopIteration which is perfect here
400          block_length, data = _load_bgzf_block(handle)
401          data_len = len(data)
402          yield start_offset, block_length, data_start, data_len
403          data_start += data_len
404
405
406  def _load_bgzf_block(handle, text_mode=False):
407      """Internal function to load the next BGZF block (PRIVATE)."""
408      magic = handle.read(4)
409      if not magic:
410          # End of file
411          raise StopIteration
412      if magic != _bgzf_magic:
413          raise ValueError(r"A BGZF (e.g. a BAM file) block should start with "
414                           r"%r, not %r; handle.tell() now says %r"
415                           % (_bgzf_magic, magic, handle.tell()))
416      gzip_mod_time, gzip_extra_flags, gzip_os, extra_len = \
417          struct.unpack("<LBBH", handle.read(8))
418
419      block_size = None
420      x_len = 0
421      while x_len < extra_len:
422          subfield_id = handle.read(2)
423          subfield_len = struct.unpack("<H", handle.read(2))[0]  # uint16_t
424          subfield_data = handle.read(subfield_len)
425          x_len += subfield_len + 4
426          if subfield_id == _bytes_BC:
427              assert subfield_len == 2, "Wrong BC payload length"
428              assert block_size is None, "Two BC subfields?"
429              block_size = struct.unpack("<H", subfield_data)[0] + 1  # uint16_t
430      assert x_len == extra_len, (x_len, extra_len)
431      assert block_size is not None, "Missing BC, this isn't a BGZF file!"
432      # Now comes the compressed data, CRC, and length of uncompressed data.
433      deflate_size = block_size - 1 - extra_len - 19
434      d = zlib.decompressobj(-15)  # Negative window size means no headers
435      data = d.decompress(handle.read(deflate_size)) + d.flush()
436      expected_crc = handle.read(4)
437      expected_size = struct.unpack("<I", handle.read(4))[0]
438      assert expected_size == len(data), \
439          "Decompressed to %i, not %i" % (len(data), expected_size)
440      # Should cope with a mix of Python platforms...
441      crc = zlib.crc32(data)
442      if crc < 0:
443          crc = struct.pack("<i", crc)
444      else:
445          crc = struct.pack("<I", crc)
446      assert expected_crc == crc, \
447          "CRC is %s, not %s" % (crc, expected_crc)
448      if text_mode:
449          return block_size, _as_string(data)
450      else:
451          return block_size, data
452
453
454  class BgzfReader(object):
455      r"""BGZF reader, acts like a read only handle but seek/tell differ.
456
457      Let's use the BgzfBlocks function to have a peek at the BGZF blocks
458      in an example BAM file,
459
460      >>> try:
461      ...     from __builtin__ import open  # Python 2
462      ... except ImportError:
463      ...     from builtins import open  # Python 3
464      ...
465      >>> handle = open("SamBam/ex1.bam", "rb")
466      >>> for values in BgzfBlocks(handle):
467      ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values)
468      Raw start 0, raw length 18239; data start 0, data length 65536
469      Raw start 18239, raw length 18223; data start 65536, data length 65536
470      Raw start 36462, raw length 18017; data start 131072, data length 65536
471      Raw start 54479, raw length 17342; data start 196608, data length 65536
472      Raw start 71821, raw length 17715; data start 262144, data length 65536
473      Raw start 89536, raw length 17728; data start 327680, data length 65536
474      Raw start 107264, raw length 17292; data start 393216, data length 63398
475      Raw start 124556, raw length 28; data start 456614, data length 0
476      >>> handle.close()
477
478      Now let's see how to use this block information to jump to
479      specific parts of the decompressed BAM file:
480
481      >>> handle = BgzfReader("SamBam/ex1.bam", "rb")
482      >>> assert 0 == handle.tell()
483      >>> magic = handle.read(4)
484      >>> assert 4 == handle.tell()
485
486      So far nothing so strange, we got the magic marker used at the
487      start of a decompressed BAM file, and the handle position makes
488      sense. Now however, let's jump to the end of this block and 4
489      bytes into the next block by reading 65536 bytes,
490
491      >>> data = handle.read(65536)
492      >>> len(data)
493      65536
494      >>> assert 1195311108 == handle.tell()
495
496      Expecting 4 + 65536 = 65540 were you? Well this is a BGZF 64-bit
497      virtual offset, which means:
498
499      >>> split_virtual_offset(1195311108)
500      (18239, 4)
501
502      You should spot 18239 as the start of the second BGZF block, while
503      the 4 is the offset into this block. See also make_virtual_offset,
504
505      >>> make_virtual_offset(18239, 4)
506      1195311108
507
508      Let's jump back to almost the start of the file,
509
510      >>> make_virtual_offset(0, 2)
511      2
512      >>> handle.seek(2)
513      2
514      >>> handle.close()
515
516      Note that you can use the max_cache argument to limit the number of
517      BGZF blocks cached in memory. The default is 100, and since each
518      block can be up to 64kb, the default cache could take up to 6MB of
519      RAM. The cache is not important for reading through the file in one
520      pass, but is important for improving performance of random access.
521      """
522
523      def __init__(self, filename=None, mode="r", fileobj=None, max_cache=100):
524          # TODO - Assuming we can seek, check for 28 bytes EOF empty block
525          # and if missing warn about possible truncation (as in samtools)?
526          if max_cache < 1:
527              raise ValueError("Use max_cache with a minimum of 1")
528          # Must open the BGZF file in binary mode, but we may want to
529          # treat the contents as either text or binary (unicode or
530          # bytes under Python 3)
531          if fileobj:
532              assert filename is None
533              handle = fileobj
534              assert "b" in handle.mode.lower()
535          else:
536              if "w" in mode.lower() or "a" in mode.lower():
537                  raise ValueError("Must use read mode (default), not write or append mode")
538              handle = _open(filename, "rb")
539          self._text = "b" not in mode.lower()
540          if self._text:
541              self._newline = "\n"
542          else:
543              self._newline = b"\n"
544          self._handle = handle
545          self.max_cache = max_cache
546          self._buffers = {}
547          self._block_start_offset = None
548          self._block_raw_length = None
549          self._load_block(handle.tell())
550
551      def _load_block(self, start_offset=None):
552          if start_offset is None:
553              # If the file is being read sequentially, then _handle.tell()
554              # should be pointing at the start of the next block.
555              # However, if seek has been used, we can't assume that.
556              start_offset = self._block_start_offset + self._block_raw_length
557          if start_offset == self._block_start_offset:
558              self._within_block_offset = 0
559              return
560          elif start_offset in self._buffers:
561              # Already in cache
562              self._buffer, self._block_raw_length = self._buffers[start_offset]
563              self._within_block_offset = 0
564              self._block_start_offset = start_offset
565              return
566          # Must hit the disk... first check cache limits,
567          while len(self._buffers) >= self.max_cache:
568              # TODO - Implement LRU cache removal?
569              self._buffers.popitem()
570          # Now load the block
571          handle = self._handle
572          if start_offset is not None:
573              handle.seek(start_offset)
574          self._block_start_offset = handle.tell()
575          try:
576              block_size, self._buffer = _load_bgzf_block(handle, self._text)
577          except StopIteration:
578              # EOF
579              block_size = 0
580              if self._text:
581                  self._buffer = ""
582              else:
583                  self._buffer = b""
584          self._within_block_offset = 0
585          self._block_raw_length = block_size
586          # Finally save the block in our cache,
587          self._buffers[self._block_start_offset] = self._buffer, block_size
588
589      def tell(self):
590          """Returns a 64-bit unsigned BGZF virtual offset."""
591          if 0 < self._within_block_offset and \
592                  self._within_block_offset == len(self._buffer):
593              # Special case where we're right at the end of a (non empty) block.
594              # For non-maximal blocks could give two possible virtual offsets,
595              # but for a maximal block can't use 65536 as the within block
596              # offset. Therefore for consistency, use the next block and a
597              # within block offset of zero.
598              return (self._block_start_offset + self._block_raw_length) << 16
599          else:
600              # return make_virtual_offset(self._block_start_offset,
601              #                            self._within_block_offset)
602              # TODO - Include bounds checking as in make_virtual_offset?
603              return (self._block_start_offset << 16) | self._within_block_offset
604
605      def seek(self, virtual_offset):
606          """Seek to a 64-bit unsigned BGZF virtual offset."""
607          # Do this inline to avoid a function call,
608          # start_offset, within_block = split_virtual_offset(virtual_offset)
609          start_offset = virtual_offset >> 16
610          within_block = virtual_offset ^ (start_offset << 16)
611          if start_offset != self._block_start_offset:
612              # Don't need to load the block if already there
613              # (this avoids a function call since _load_block would do nothing)
614              self._load_block(start_offset)
615              assert start_offset == self._block_start_offset
616          if within_block > len(self._buffer):
617              if not (within_block == 0 and len(self._buffer) == 0):
618                  raise ValueError("Within offset %i but block size only %i"
619                                   % (within_block, len(self._buffer)))
620          self._within_block_offset = within_block
621          # assert virtual_offset == self.tell(), \
622          #     "Did seek to %i (%i, %i), but tell says %i (%i, %i)" \
623          #     % (virtual_offset, start_offset, within_block,
624          #        self.tell(), self._block_start_offset,
625          #        self._within_block_offset)
626          return virtual_offset
627
628      def read(self, size=-1):
629          if size < 0:
630              raise NotImplementedError("Don't be greedy, that could be massive!")
631          elif size == 0:
632              if self._text:
633                  return ""
634              else:
635                  return b""
636          elif self._within_block_offset + size <= len(self._buffer):
637              # This may leave us right at the end of a block
638              # (lazy loading, don't load the next block unless we have to)
639              data = self._buffer[self._within_block_offset:self._within_block_offset + size]
640              self._within_block_offset += size
641              assert data  # Must be at least 1 byte
642              return data
643          else:
644              data = self._buffer[self._within_block_offset:]
645              size -= len(data)
646              self._load_block()  # will reset offsets
647              # TODO - Test with corner case of an empty block followed by
648              # a non-empty block
649              if not self._buffer:
650                  return data  # EOF
651              elif size:
652                  # TODO - Avoid recursion
653                  return data + self.read(size)
654              else:
655                  # Only needed the end of the last block
656                  return data
657
658      def readline(self):
659          i = self._buffer.find(self._newline, self._within_block_offset)
660          # Three cases to consider,
661          if i == -1:
662              # No newline, need to read in more data
663              data = self._buffer[self._within_block_offset:]
664              self._load_block()  # will reset offsets
665              if not self._buffer:
666                  return data  # EOF
667              else:
668                  # TODO - Avoid recursion
669                  return data + self.readline()
670          elif i + 1 == len(self._buffer):
671              # Found new line, but right at end of block (SPECIAL)
672              data = self._buffer[self._within_block_offset:]
673              # Must now load the next block to ensure tell() works
674              self._load_block()  # will reset offsets
675              assert data
676              return data
677          else:
678              # Found new line, not at end of block (easy case, no IO)
679              data = self._buffer[self._within_block_offset:i + 1]
680              self._within_block_offset = i + 1
681              # assert data.endswith(self._newline)
682              return data
683
684      def __next__(self):
685          line = self.readline()
686          if not line:
687              raise StopIteration
688          return line
689
690      if sys.version_info[0] < 3:
691          def next(self):
692              """Python 2 style alias for Python 3 style __next__ method."""
693              return self.__next__()
694
695      def __iter__(self):
696          return self
697
698      def close(self):
699          self._handle.close()
700          self._buffer = None
701          self._block_start_offset = None
702          self._buffers = None
703
704      def seekable(self):
705          return True
706
707      def isatty(self):
708          return False
709
710      def fileno(self):
711          return self._handle.fileno()
712
713      def __enter__(self):
714          return self
715
716      def __exit__(self, type, value, traceback):
717          self.close()
718
719
720  class BgzfWriter(object):
721
722      def __init__(self, filename=None, mode="w", fileobj=None, compresslevel=6):
723          if fileobj:
724              assert filename is None
725              handle = fileobj
726          else:
727              if "w" not in mode.lower() and "a" not in mode.lower():
728                  raise ValueError("Must use write or append mode, not %r" % mode)
729              if "a" in mode.lower():
730                  handle = _open(filename, "ab")
731              else:
732                  handle = _open(filename, "wb")
733          self._text = "b" not in mode.lower()
734          self._handle = handle
735          self._buffer = b""
736          self.compresslevel = compresslevel
737
738      def _write_block(self, block):
739          # print("Saving %i bytes" % len(block))
740          start_offset = self._handle.tell()
741          assert len(block) <= 65536
742          # Giving a negative window bits means no gzip/zlib headers,
743          # -15 used in samtools
744          c = zlib.compressobj(self.compresslevel,
745                               zlib.DEFLATED,
746                               -15,
747                               zlib.DEF_MEM_LEVEL,
748                               0)
749          compressed = c.compress(block) + c.flush()
750          del c
751          assert len(compressed) < 65536, \
752              "TODO - Didn't compress enough, try less data in this block"
753          crc = zlib.crc32(block)
754          # Should cope with a mix of Python platforms...
755          if crc < 0:
756              crc = struct.pack("<i", crc)
757          else:
758              crc = struct.pack("<I", crc)
759          bsize = struct.pack("<H", len(compressed) + 25)  # includes -1
760          crc = struct.pack("<I", zlib.crc32(block) & 0xffffffff)
761          uncompressed_length = struct.pack("<I", len(block))
762          # Fixed 16 bytes,
763          # gzip magic bytes (4) mod time (4),
764          # gzip flag (1), os (1), extra length which is six (2),
765          # sub field which is BC (2), sub field length of two (2),
766          # Variable data,
767          # 2 bytes: block length as BC sub field (2)
768          # X bytes: the data
769          # 8 bytes: crc (4), uncompressed data length (4)
770          data = _bgzf_header + bsize + compressed + crc + uncompressed_length
771          self._handle.write(data)
772
773      def write(self, data):
774          # TODO - Check bytes vs unicode
775          data = _as_bytes(data)
776          # block_size = 2**16 = 65536
777          data_len = len(data)
778          if len(self._buffer) + data_len < 65536:
779              # print("Cached %r" % data)
780              self._buffer += data
781              return
782          else:
783              # print("Got %r, writing out some data..." % data)
784              self._buffer += data
785              while len(self._buffer) >= 65536:
786                  self._write_block(self._buffer[:65536])
787                  self._buffer = self._buffer[65536:]
788
789      def flush(self):
790          while len(self._buffer) >= 65536:
791              self._write_block(self._buffer[:65535])
792              self._buffer = self._buffer[65535:]
793          self._write_block(self._buffer)
794          self._buffer = b""
795          self._handle.flush()
796
797      def close(self):
798          """Flush data, write 28 bytes BGZF EOF marker, and close BGZF file.
799
800          samtools will look for a magic EOF marker, just a 28 byte empty BGZF
801          block, and if it is missing warns the BAM file may be truncated. In
802          addition to samtools writing this block, so too does bgzip - so this
803          implementation does too.
804          """
805          if self._buffer:
806              self.flush()
807          self._handle.write(_bgzf_eof)
808          self._handle.flush()
809          self._handle.close()
810
811      def tell(self):
812          """Returns a BGZF 64-bit virtual offset."""
813          return make_virtual_offset(self._handle.tell(), len(self._buffer))
814
815      def seekable(self):
816          # Not seekable, but we do support tell...
817          return False
818
819      def isatty(self):
820          return False
821
822      def fileno(self):
823          return self._handle.fileno()
824
825      def __enter__(self):
826          return self
827
828      def __exit__(self, type, value, traceback):
829          self.close()
830
831
832  if __name__ == "__main__":
833      if len(sys.argv) > 1:
834          print("Call this with no arguments and pipe uncompressed data in on stdin")
835          print("and it will produce BGZF compressed data on stdout. e.g.")
836          print("")
837          print("./bgzf.py < example.fastq > example.fastq.bgz")
838          print("")
839          print("The extension convention of *.bgz is to distinguish these from *.gz")
840          print("used for standard gzipped files without the block structure of BGZF.")
841          print("You can use the standard gunzip command to decompress BGZF files,")
842          print("if it complains about the extension try something like this:")
843          print("")
844          print("cat example.fastq.bgz | gunzip > example.fastq")
845          print("")
846          print("See also the tool bgzip that comes with samtools")
847          sys.exit(0)
848
849      sys.stderr.write("Producing BGZF output from stdin...\n")
850      w = BgzfWriter(fileobj=sys.stdout)
851      while True:
852          data = sys.stdin.read(65536)
853          w.write(data)
854          if not data:
855              break
856      # Doing close will write an empty BGZF block as EOF marker:
857      w.close()
858      sys.stderr.write("BGZF data produced\n")
859