Package Bio :: Module bgzf
[hide private]
[frames] | no frames]

Source Code for Module Bio.bgzf

  1  #!/usr/bin/env python 
  2  # Copyright 2010-2013 by Peter Cock. 
  3  # All rights reserved. 
  4  # This code is part of the Biopython distribution and governed by its 
  5  # license.  Please see the LICENSE file that should have been included 
  6  # as part of this package. 
  7  r"""Read and write BGZF compressed files (the GZIP variant used in BAM). 
  8   
  9  The SAM/BAM file format (Sequence Alignment/Map) comes in a plain text 
 10  format (SAM), and a compressed binary format (BAM). The latter uses a 
 11  modified form of gzip compression called BGZF (Blocked GNU Zip Format), 
 12  which can be applied to any file format to provide compression with 
 13  efficient random access. BGZF is described together with the SAM/BAM 
 14  file format at http://samtools.sourceforge.net/SAM1.pdf 
 15   
 16  Please read the text below about 'virtual offsets' before using BGZF 
 17  files for random access. 
 18   
 19   
 20  Aim of this module 
 21  ------------------ 
 22   
 23  The Python gzip library can be used to read BGZF files, since for 
 24  decompression they are just (specialised) gzip files. What this 
 25  module aims to facilitate is random access to BGZF files (using the 
 26  'virtual offset' idea), and writing BGZF files (which means using 
 27  suitably sized gzip blocks and writing the extra 'BC' field in the 
 28  gzip headers). As in the gzip library, the zlib library is used 
 29  internally. 
 30   
 31  In addition to being required for random access to and writing of 
 32  BAM files, the BGZF format can also be used on other sequential 
 33  data (in the sense of one record after another), such as most of 
 34  the sequence data formats supported in Bio.SeqIO (like FASTA, 
 35  FASTQ, GenBank, etc) or large MAF alignments. 
 36   
 37  The Bio.SeqIO indexing functions use this module to support BGZF files. 
 38   
 39   
 40  Technical Introduction to BGZF 
 41  ------------------------------ 
 42   
 43  The gzip file format allows multiple compressed blocks, each of which 
 44  could be a stand alone gzip file. As an interesting bonus, this means 
 45  you can use Unix "cat" to combined to gzip files into one by 
 46  concatenating them. Also, each block can have one of several compression 
 47  levels (including uncompressed, which actually takes up a little bit 
 48  more space due to the gzip header). 
 49   
 50  What the BAM designers realised was that while random access to data 
 51  stored in traditional gzip files was slow, breaking the file into 
 52  gzip blocks would allow fast random access to each block. To access 
 53  a particular piece of the decompressed data, you just need to know 
 54  which block it starts in (the offset of the gzip block start), and 
 55  how far into the (decompressed) contents of the block you need to 
 56  read. 
 57   
 58  One problem with this is finding the gzip block sizes efficiently. 
 59  You can do it with a standard gzip file, but it requires every block 
 60  to be decompressed -- and that would be rather slow. Additionally 
 61  typical gzip files may use very large blocks. 
 62   
 63  All that differs in BGZF is that compressed size of each gzip block 
 64  is limited to 2^16 bytes, and an extra 'BC' field in the gzip header 
 65  records this size. Traditional decompression tools can ignore this, 
 66  and unzip the file just like any other gzip file. 
 67   
 68  The point of this is you can look at the first BGZF block, find out 
 69  how big it is from this 'BC' header, and thus seek immediately to 
 70  the second block, and so on. 
 71   
 72  The BAM indexing scheme records read positions using a 64 bit 
 73  'virtual offset', comprising coffset<<16|uoffset, where coffset is 
 74  the file offset of the BGZF block containing the start of the read 
 75  (unsigned integer using up to 64-16 = 48 bits), and uoffset is the 
 76  offset within the (decompressed) block (unsigned 16 bit integer). 
 77   
 78  This limits you to BAM files where the last block starts by 2^48 
 79  bytes, or 256 petabytes, and the decompressed size of each block 
 80  is at most 2^16 bytes, or 64kb. Note that this matches the BGZF 
 81  'BC' field size which limits the compressed size of each block to 
 82  2^16 bytes, allowing for BAM files to use BGZF with no gzip 
 83  compression (useful for intermediate files in memory to reduced 
 84  CPU load). 
 85   
 86   
 87  Warning about namespaces 
 88  ------------------------ 
 89   
 90  It is considered a bad idea to use "from XXX import *" in Python, because 
 91  it pollutes the namespace. This is a real issue with Bio.bgzf (and the 
 92  standard Python library gzip) because they contain a function called open 
 93  i.e. Suppose you do this: 
 94   
 95  >>> from Bio.bgzf import * 
 96  >>> print(open.__module__) 
 97  Bio.bgzf 
 98   
 99  Or, 
100   
101  >>> from gzip import * 
102  >>> print(open.__module__) 
103  gzip 
104   
105  Notice that the open function has been replaced. You can "fix" this if you 
106  need to by importing the built-in open function: 
107   
108  >>> try: 
109  ...     from __builtin__ import open # Python 2 
110  ... except ImportError: 
111  ...     from builtins import open # Python 3 
112  ... 
113   
114  However, what we recommend instead is to use the explicit namespace, e.g. 
115   
116  >>> from Bio import bgzf 
117  >>> print(bgzf.open.__module__) 
118  Bio.bgzf 
119   
120   
121  Example 
122  ------- 
123   
124  This is an ordinary GenBank file compressed using BGZF, so it can 
125  be decompressed using gzip, 
126   
127  >>> import gzip 
128  >>> handle = gzip.open("GenBank/NC_000932.gb.bgz", "r") 
129  >>> assert 0 == handle.tell() 
130  >>> line = handle.readline() 
131  >>> assert 80 == handle.tell() 
132  >>> line = handle.readline() 
133  >>> assert 143 == handle.tell() 
134  >>> data = handle.read(70000) 
135  >>> assert 70143 == handle.tell() 
136  >>> handle.close() 
137   
138  We can also access the file using the BGZF reader - but pay 
139  attention to the file offsets which will be explained below: 
140   
141  >>> handle = BgzfReader("GenBank/NC_000932.gb.bgz", "r") 
142  >>> assert 0 == handle.tell() 
143  >>> print(handle.readline().rstrip()) 
144  LOCUS       NC_000932             154478 bp    DNA     circular PLN 15-APR-2009 
145  >>> assert 80 == handle.tell() 
146  >>> print(handle.readline().rstrip()) 
147  DEFINITION  Arabidopsis thaliana chloroplast, complete genome. 
148  >>> assert 143 == handle.tell() 
149  >>> data = handle.read(70000) 
150  >>> assert 987828735 == handle.tell() 
151  >>> print(handle.readline().rstrip()) 
152  f="GeneID:844718" 
153  >>> print(handle.readline().rstrip()) 
154       CDS             complement(join(84337..84771,85454..85843)) 
155  >>> offset = handle.seek(make_virtual_offset(55074, 126)) 
156  >>> print(handle.readline().rstrip()) 
157      68521 tatgtcattc gaaattgtat aaagacaact cctatttaat agagctattt gtgcaagtat 
158  >>> handle.close() 
159   
160  Notice the handle's offset looks different as a BGZF file. This 
161  brings us to the key point about BGZF, which is the block structure: 
162   
163  >>> handle = open("GenBank/NC_000932.gb.bgz", "rb") 
164  >>> for values in BgzfBlocks(handle): 
165  ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values) 
166  Raw start 0, raw length 15073; data start 0, data length 65536 
167  Raw start 15073, raw length 17857; data start 65536, data length 65536 
168  Raw start 32930, raw length 22144; data start 131072, data length 65536 
169  Raw start 55074, raw length 22230; data start 196608, data length 65536 
170  Raw start 77304, raw length 14939; data start 262144, data length 43478 
171  Raw start 92243, raw length 28; data start 305622, data length 0 
172  >>> handle.close() 
173   
174  In this example the first three blocks are 'full' and hold 65536 bytes 
175  of uncompressed data. The fourth block isn't full and holds 43478 bytes. 
176  Finally there is a special empty fifth block which takes 28 bytes on 
177  disk and serves as an 'end of file' (EOF) marker. If this is missing, 
178  it is possible your BGZF file is incomplete. 
179   
180  By reading ahead 70,000 bytes we moved into the second BGZF block, 
181  and at that point the BGZF virtual offsets start to look different 
182  to a simple offset into the decompressed data as exposed by the gzip 
183  library. 
184   
185  As an example, consider seeking to the decompressed position 196734. 
186  Since 196734 = 65536 + 65536 + 65536 + 126 = 65536*3 + 126, this 
187  is equivalent to jumping the first three blocks (which in this 
188  specific example are all size 65536 after decompression - which 
189  does not always hold) and starting at byte 126 of the fourth block 
190  (after decompression). For BGZF, we need to know the fourth block's 
191  offset of 55074 and the offset within the block of 126 to get the 
192  BGZF virtual offset. 
193   
194  >>> print(55074 << 16 | 126) 
195  3609329790 
196  >>> print(bgzf.make_virtual_offset(55074, 126)) 
197  3609329790 
198   
199  Thus for this BGZF file, decompressed position 196734 corresponds 
200  to the virtual offset 3609329790. However, another BGZF file with 
201  different contents would have compressed more or less efficiently, 
202  so the compressed blocks would be different sizes. What this means 
203  is the mapping between the uncompressed offset and the compressed 
204  virtual offset depends on the BGZF file you are using. 
205   
206  If you are accessing a BGZF file via this module, just use the 
207  handle.tell() method to note the virtual offset of a position you 
208  may later want to return to using handle.seek(). 
209   
210  The catch with BGZF virtual offsets is while they can be compared 
211  (which offset comes first in the file), you cannot safely subtract 
212  them to get the size of the data between them, nor add/subtract 
213  a relative offset. 
214   
215  Of course you can parse this file with Bio.SeqIO using BgzfReader, 
216  although there isn't any benefit over using gzip.open(...), unless 
217  you want to index BGZF compressed sequence files: 
218   
219  >>> from Bio import SeqIO 
220  >>> handle = BgzfReader("GenBank/NC_000932.gb.bgz") 
221  >>> record = SeqIO.read(handle, "genbank") 
222  >>> handle.close() 
223  >>> print(record.id) 
224  NC_000932.1 
225   
226  """ 
227   
228  from __future__ import print_function 
229   
230  import sys # to detect when under Python 2 
231  import zlib 
232  import struct 
233   
234  from Bio._py3k import _as_bytes, _as_string 
235  from Bio._py3k import open as _open 
236   
237  #For Python 2 can just use: _bgzf_magic = '\x1f\x8b\x08\x04' 
238  #but need to use bytes on Python 3 
239  _bgzf_magic = b"\x1f\x8b\x08\x04" 
240  _bgzf_header = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00\x42\x43\x02\x00" 
241  _bgzf_eof = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00BC\x02\x00\x1b\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00" 
242  _bytes_BC = b"BC" 
243   
244   
245 -def open(filename, mode="rb"):
246 """Open a BGZF file for reading, writing or appending.""" 247 if "r" in mode.lower(): 248 return BgzfReader(filename, mode) 249 elif "w" in mode.lower() or "a" in mode.lower(): 250 return BgzfWriter(filename, mode) 251 else: 252 raise ValueError("Bad mode %r" % mode)
253 254
255 -def make_virtual_offset(block_start_offset, within_block_offset):
256 """Compute a BGZF virtual offset from block start and within block offsets. 257 258 The BAM indexing scheme records read positions using a 64 bit 259 'virtual offset', comprising in C terms: 260 261 block_start_offset<<16 | within_block_offset 262 263 Here block_start_offset is the file offset of the BGZF block 264 start (unsigned integer using up to 64-16 = 48 bits), and 265 within_block_offset within the (decompressed) block (unsigned 266 16 bit integer). 267 268 >>> make_virtual_offset(0, 0) 269 0 270 >>> make_virtual_offset(0, 1) 271 1 272 >>> make_virtual_offset(0, 2**16 - 1) 273 65535 274 >>> make_virtual_offset(0, 2**16) 275 Traceback (most recent call last): 276 ... 277 ValueError: Require 0 <= within_block_offset < 2**16, got 65536 278 279 >>> 65536 == make_virtual_offset(1, 0) 280 True 281 >>> 65537 == make_virtual_offset(1, 1) 282 True 283 >>> 131071 == make_virtual_offset(1, 2**16 - 1) 284 True 285 286 >>> 6553600000 == make_virtual_offset(100000, 0) 287 True 288 >>> 6553600001 == make_virtual_offset(100000, 1) 289 True 290 >>> 6553600010 == make_virtual_offset(100000, 10) 291 True 292 293 >>> make_virtual_offset(2**48, 0) 294 Traceback (most recent call last): 295 ... 296 ValueError: Require 0 <= block_start_offset < 2**48, got 281474976710656 297 298 """ 299 if within_block_offset < 0 or within_block_offset >= 65536: 300 raise ValueError("Require 0 <= within_block_offset < 2**16, got %i" % within_block_offset) 301 if block_start_offset < 0 or block_start_offset >= 281474976710656: 302 raise ValueError("Require 0 <= block_start_offset < 2**48, got %i" % block_start_offset) 303 return (block_start_offset<<16) | within_block_offset
304 305
306 -def split_virtual_offset(virtual_offset):
307 """Divides a 64-bit BGZF virtual offset into block start & within block offsets. 308 309 >>> (100000, 0) == split_virtual_offset(6553600000) 310 True 311 >>> (100000, 10) == split_virtual_offset(6553600010) 312 True 313 314 """ 315 start = virtual_offset>>16 316 return start, virtual_offset ^ (start<<16)
317 318
319 -def BgzfBlocks(handle):
320 """Low level debugging function to inspect BGZF blocks. 321 322 Expects a BGZF compressed file opened in binary read mode using 323 the builtin open function. Do not use a handle from this bgzf 324 module or the gzip module's open function which will decompress 325 the file. 326 327 Returns the block start offset (see virtual offsets), the block 328 length (add these for the start of the next block), and the 329 decompressed length of the blocks contents (limited to 65536 in 330 BGZF), as an iterator - one tuple per BGZF block. 331 332 >>> try: 333 ... from __builtin__ import open # Python 2 334 ... except ImportError: 335 ... from builtins import open # Python 3 336 ... 337 >>> handle = open("SamBam/ex1.bam", "rb") 338 >>> for values in BgzfBlocks(handle): 339 ... print("Raw start %i, raw length %i; data start %i, data length %i" % values) 340 Raw start 0, raw length 18239; data start 0, data length 65536 341 Raw start 18239, raw length 18223; data start 65536, data length 65536 342 Raw start 36462, raw length 18017; data start 131072, data length 65536 343 Raw start 54479, raw length 17342; data start 196608, data length 65536 344 Raw start 71821, raw length 17715; data start 262144, data length 65536 345 Raw start 89536, raw length 17728; data start 327680, data length 65536 346 Raw start 107264, raw length 17292; data start 393216, data length 63398 347 Raw start 124556, raw length 28; data start 456614, data length 0 348 >>> handle.close() 349 350 Indirectly we can tell this file came from an old version of 351 samtools because all the blocks (except the final one and the 352 dummy empty EOF marker block) are 65536 bytes. Later versions 353 avoid splitting a read between two blocks, and give the header 354 its own block (useful to speed up replacing the header). You 355 can see this in ex1_refresh.bam created using samtools 0.1.18: 356 357 samtools view -b ex1.bam > ex1_refresh.bam 358 359 >>> handle = open("SamBam/ex1_refresh.bam", "rb") 360 >>> for values in BgzfBlocks(handle): 361 ... print("Raw start %i, raw length %i; data start %i, data length %i" % values) 362 Raw start 0, raw length 53; data start 0, data length 38 363 Raw start 53, raw length 18195; data start 38, data length 65434 364 Raw start 18248, raw length 18190; data start 65472, data length 65409 365 Raw start 36438, raw length 18004; data start 130881, data length 65483 366 Raw start 54442, raw length 17353; data start 196364, data length 65519 367 Raw start 71795, raw length 17708; data start 261883, data length 65411 368 Raw start 89503, raw length 17709; data start 327294, data length 65466 369 Raw start 107212, raw length 17390; data start 392760, data length 63854 370 Raw start 124602, raw length 28; data start 456614, data length 0 371 >>> handle.close() 372 373 The above example has no embedded SAM header (thus the first block 374 is very small at just 38 bytes of decompressed data), while the next 375 example does (a larger block of 103 bytes). Notice that the rest of 376 the blocks show the same sizes (they contain the same read data): 377 378 >>> handle = open("SamBam/ex1_header.bam", "rb") 379 >>> for values in BgzfBlocks(handle): 380 ... print("Raw start %i, raw length %i; data start %i, data length %i" % values) 381 Raw start 0, raw length 104; data start 0, data length 103 382 Raw start 104, raw length 18195; data start 103, data length 65434 383 Raw start 18299, raw length 18190; data start 65537, data length 65409 384 Raw start 36489, raw length 18004; data start 130946, data length 65483 385 Raw start 54493, raw length 17353; data start 196429, data length 65519 386 Raw start 71846, raw length 17708; data start 261948, data length 65411 387 Raw start 89554, raw length 17709; data start 327359, data length 65466 388 Raw start 107263, raw length 17390; data start 392825, data length 63854 389 Raw start 124653, raw length 28; data start 456679, data length 0 390 >>> handle.close() 391 392 """ 393 data_start = 0 394 while True: 395 start_offset = handle.tell() 396 #This may raise StopIteration which is perfect here 397 block_length, data = _load_bgzf_block(handle) 398 data_len = len(data) 399 yield start_offset, block_length, data_start, data_len 400 data_start += data_len
401 402
403 -def _load_bgzf_block(handle, text_mode=False):
404 """Internal function to load the next BGZF function (PRIVATE).""" 405 magic = handle.read(4) 406 if not magic: 407 #End of file 408 raise StopIteration 409 if magic != _bgzf_magic: 410 raise ValueError(r"A BGZF (e.g. a BAM file) block should start with " 411 r"%r, not %r; handle.tell() now says %r" 412 % (_bgzf_magic, magic, handle.tell())) 413 gzip_mod_time, gzip_extra_flags, gzip_os, extra_len = \ 414 struct.unpack("<LBBH", handle.read(8)) 415 416 block_size = None 417 x_len = 0 418 while x_len < extra_len: 419 subfield_id = handle.read(2) 420 subfield_len = struct.unpack("<H", handle.read(2))[0] # uint16_t 421 subfield_data = handle.read(subfield_len) 422 x_len += subfield_len + 4 423 if subfield_id == _bytes_BC: 424 assert subfield_len == 2, "Wrong BC payload length" 425 assert block_size is None, "Two BC subfields?" 426 block_size = struct.unpack("<H", subfield_data)[0] + 1 # uint16_t 427 assert x_len == extra_len, (x_len, extra_len) 428 assert block_size is not None, "Missing BC, this isn't a BGZF file!" 429 #Now comes the compressed data, CRC, and length of uncompressed data. 430 deflate_size = block_size - 1 - extra_len - 19 431 d = zlib.decompressobj(-15) # Negative window size means no headers 432 data = d.decompress(handle.read(deflate_size)) + d.flush() 433 expected_crc = handle.read(4) 434 expected_size = struct.unpack("<I", handle.read(4))[0] 435 assert expected_size == len(data), \ 436 "Decompressed to %i, not %i" % (len(data), expected_size) 437 #Should cope with a mix of Python platforms... 438 crc = zlib.crc32(data) 439 if crc < 0: 440 crc = struct.pack("<i", crc) 441 else: 442 crc = struct.pack("<I", crc) 443 assert expected_crc == crc, \ 444 "CRC is %s, not %s" % (crc, expected_crc) 445 if text_mode: 446 return block_size, _as_string(data) 447 else: 448 return block_size, data
449 450
451 -class BgzfReader(object):
452 r"""BGZF reader, acts like a read only handle but seek/tell differ. 453 454 Let's use the BgzfBlocks function to have a peak at the BGZF blocks 455 in an example BAM file, 456 457 >>> try: 458 ... from __builtin__ import open # Python 2 459 ... except ImportError: 460 ... from builtins import open # Python 3 461 ... 462 >>> handle = open("SamBam/ex1.bam", "rb") 463 >>> for values in BgzfBlocks(handle): 464 ... print("Raw start %i, raw length %i; data start %i, data length %i" % values) 465 Raw start 0, raw length 18239; data start 0, data length 65536 466 Raw start 18239, raw length 18223; data start 65536, data length 65536 467 Raw start 36462, raw length 18017; data start 131072, data length 65536 468 Raw start 54479, raw length 17342; data start 196608, data length 65536 469 Raw start 71821, raw length 17715; data start 262144, data length 65536 470 Raw start 89536, raw length 17728; data start 327680, data length 65536 471 Raw start 107264, raw length 17292; data start 393216, data length 63398 472 Raw start 124556, raw length 28; data start 456614, data length 0 473 >>> handle.close() 474 475 Now let's see how to use this block information to jump to 476 specific parts of the decompressed BAM file: 477 478 >>> handle = BgzfReader("SamBam/ex1.bam", "rb") 479 >>> assert 0 == handle.tell() 480 >>> magic = handle.read(4) 481 >>> assert 4 == handle.tell() 482 483 So far nothing so strange, we got the magic marker used at the 484 start of a decompressed BAM file, and the handle position makes 485 sense. Now however, let's jump to the end of this block and 4 486 bytes into the next block by reading 65536 bytes, 487 488 >>> data = handle.read(65536) 489 >>> len(data) 490 65536 491 >>> assert 1195311108 == handle.tell() 492 493 Expecting 4 + 65536 = 65540 were you? Well this is a BGZF 64-bit 494 virtual offset, which means: 495 496 >>> split_virtual_offset(1195311108) 497 (18239, 4) 498 499 You should spot 18239 as the start of the second BGZF block, while 500 the 4 is the offset into this block. See also make_virtual_offset, 501 502 >>> make_virtual_offset(18239, 4) 503 1195311108 504 505 Let's jump back to almost the start of the file, 506 507 >>> make_virtual_offset(0, 2) 508 2 509 >>> handle.seek(2) 510 2 511 >>> handle.close() 512 513 Note that you can use the max_cache argument to limit the number of 514 BGZF blocks cached in memory. The default is 100, and since each 515 block can be up to 64kb, the default cache could take up to 6MB of 516 RAM. The cache is not important for reading through the file in one 517 pass, but is important for improving performance of random access. 518 """ 519
520 - def __init__(self, filename=None, mode="r", fileobj=None, max_cache=100):
521 #TODO - Assuming we can seek, check for 28 bytes EOF empty block 522 #and if missing warn about possible truncation (as in samtools)? 523 if max_cache < 1: 524 raise ValueError("Use max_cache with a minimum of 1") 525 #Must open the BGZF file in binary mode, but we may want to 526 #treat the contents as either text or binary (unicode or 527 #bytes under Python 3) 528 if fileobj: 529 assert filename is None 530 handle = fileobj 531 assert "b" in handle.mode.lower() 532 else: 533 if "w" in mode.lower() \ 534 or "a" in mode.lower(): 535 raise ValueError("Must use read mode (default), not write or append mode") 536 handle = _open(filename, "rb") 537 self._text = "b" not in mode.lower() 538 if self._text: 539 self._newline = "\n" 540 else: 541 self._newline = b"\n" 542 self._handle = handle 543 self.max_cache = max_cache 544 self._buffers = {} 545 self._block_start_offset = None 546 self._block_raw_length = None 547 self._load_block(handle.tell())
548
549 - def _load_block(self, start_offset=None):
550 if start_offset is None: 551 #If the file is being read sequentially, then _handle.tell() 552 #should be pointing at the start of the next block. 553 #However, if seek has been used, we can't assume that. 554 start_offset = self._block_start_offset + self._block_raw_length 555 if start_offset == self._block_start_offset: 556 self._within_block_offset = 0 557 return 558 elif start_offset in self._buffers: 559 #Already in cache 560 self._buffer, self._block_raw_length = self._buffers[start_offset] 561 self._within_block_offset = 0 562 self._block_start_offset = start_offset 563 return 564 #Must hit the disk... first check cache limits, 565 while len(self._buffers) >= self.max_cache: 566 #TODO - Implemente LRU cache removal? 567 self._buffers.popitem() 568 #Now load the block 569 handle = self._handle 570 if start_offset is not None: 571 handle.seek(start_offset) 572 self._block_start_offset = handle.tell() 573 try: 574 block_size, self._buffer = _load_bgzf_block(handle, self._text) 575 except StopIteration: 576 #EOF 577 block_size = 0 578 if self._text: 579 self._buffer = "" 580 else: 581 self._buffer = b"" 582 self._within_block_offset = 0 583 self._block_raw_length = block_size 584 #Finally save the block in our cache, 585 self._buffers[self._block_start_offset] = self._buffer, block_size
586
587 - def tell(self):
588 """Returns a 64-bit unsigned BGZF virtual offset.""" 589 if 0 < self._within_block_offset == len(self._buffer): 590 #Special case where we're right at the end of a (non empty) block. 591 #For non-maximal blocks could give two possible virtual offsets, 592 #but for a maximal block can't use 65536 as the within block 593 #offset. Therefore for consistency, use the next block and a 594 #within block offset of zero. 595 return (self._block_start_offset + self._block_raw_length) << 16 596 else: 597 #return make_virtual_offset(self._block_start_offset, 598 # self._within_block_offset) 599 #TODO - Include bounds checking as in make_virtual_offset? 600 return (self._block_start_offset<<16) | self._within_block_offset
601
602 - def seek(self, virtual_offset):
603 """Seek to a 64-bit unsigned BGZF virtual offset.""" 604 #Do this inline to avoid a function call, 605 #start_offset, within_block = split_virtual_offset(virtual_offset) 606 start_offset = virtual_offset>>16 607 within_block = virtual_offset ^ (start_offset<<16) 608 if start_offset != self._block_start_offset: 609 #Don't need to load the block if already there 610 #(this avoids a function call since _load_block would do nothing) 611 self._load_block(start_offset) 612 assert start_offset == self._block_start_offset 613 if within_block > len(self._buffer) \ 614 and not (within_block == 0 and len(self._buffer)==0): 615 raise ValueError("Within offset %i but block size only %i" 616 % (within_block, len(self._buffer))) 617 self._within_block_offset = within_block 618 #assert virtual_offset == self.tell(), \ 619 # "Did seek to %i (%i, %i), but tell says %i (%i, %i)" \ 620 # % (virtual_offset, start_offset, within_block, 621 # self.tell(), self._block_start_offset, self._within_block_offset) 622 return virtual_offset
623
624 - def read(self, size=-1):
625 if size < 0: 626 raise NotImplementedError("Don't be greedy, that could be massive!") 627 elif size == 0: 628 if self._text: 629 return "" 630 else: 631 return b"" 632 elif self._within_block_offset + size <= len(self._buffer): 633 #This may leave us right at the end of a block 634 #(lazy loading, don't load the next block unless we have too) 635 data = self._buffer[self._within_block_offset:self._within_block_offset + size] 636 self._within_block_offset += size 637 assert data # Must be at least 1 byte 638 return data 639 else: 640 data = self._buffer[self._within_block_offset:] 641 size -= len(data) 642 self._load_block() # will reset offsets 643 #TODO - Test with corner case of an empty block followed by 644 #a non-empty block 645 if not self._buffer: 646 return data # EOF 647 elif size: 648 #TODO - Avoid recursion 649 return data + self.read(size) 650 else: 651 #Only needed the end of the last block 652 return data
653
654 - def readline(self):
655 i = self._buffer.find(self._newline, self._within_block_offset) 656 #Three cases to consider, 657 if i==-1: 658 #No newline, need to read in more data 659 data = self._buffer[self._within_block_offset:] 660 self._load_block() # will reset offsets 661 if not self._buffer: 662 return data # EOF 663 else: 664 #TODO - Avoid recursion 665 return data + self.readline() 666 elif i + 1 == len(self._buffer): 667 #Found new line, but right at end of block (SPECIAL) 668 data = self._buffer[self._within_block_offset:] 669 #Must now load the next block to ensure tell() works 670 self._load_block() # will reset offsets 671 assert data 672 return data 673 else: 674 #Found new line, not at end of block (easy case, no IO) 675 data = self._buffer[self._within_block_offset:i+1] 676 self._within_block_offset = i + 1 677 #assert data.endswith(self._newline) 678 return data
679
680 - def __next__(self):
681 line = self.readline() 682 if not line: 683 raise StopIteration 684 return line
685 686 if sys.version_info[0] < 3:
687 - def next(self):
688 """Python 2 style alias for Python 3 style __next__ method.""" 689 return self.__next__()
690
691 - def __iter__(self):
692 return self
693
694 - def close(self):
695 self._handle.close() 696 self._buffer = None 697 self._block_start_offset = None 698 self._buffers = None
699
700 - def seekable(self):
701 return True
702
703 - def isatty(self):
704 return False
705
706 - def fileno(self):
707 return self._handle.fileno()
708
709 - def __enter__(self):
710 return self
711
712 - def __exit__(self, type, value, traceback):
713 self.close()
714 715
716 -class BgzfWriter(object):
717
718 - def __init__(self, filename=None, mode="w", fileobj=None, compresslevel=6):
719 if fileobj: 720 assert filename is None 721 handle = fileobj 722 else: 723 if "w" not in mode.lower() \ 724 and "a" not in mode.lower(): 725 raise ValueError("Must use write or append mode, not %r" % mode) 726 if "a" in mode.lower(): 727 handle = _open(filename, "ab") 728 else: 729 handle = _open(filename, "wb") 730 self._text = "b" not in mode.lower() 731 self._handle = handle 732 self._buffer = b"" 733 self.compresslevel = compresslevel
734
735 - def _write_block(self, block):
736 #print("Saving %i bytes" % len(block)) 737 start_offset = self._handle.tell() 738 assert len(block) <= 65536 739 #Giving a negative window bits means no gzip/zlib headers, -15 used in samtools 740 c = zlib.compressobj(self.compresslevel, 741 zlib.DEFLATED, 742 -15, 743 zlib.DEF_MEM_LEVEL, 744 0) 745 compressed = c.compress(block) + c.flush() 746 del c 747 assert len(compressed) < 65536, "TODO - Didn't compress enough, try less data in this block" 748 crc = zlib.crc32(block) 749 #Should cope with a mix of Python platforms... 750 if crc < 0: 751 crc = struct.pack("<i", crc) 752 else: 753 crc = struct.pack("<I", crc) 754 bsize = struct.pack("<H", len(compressed)+25) # includes -1 755 crc = struct.pack("<I", zlib.crc32(block) & 0xffffffff) 756 uncompressed_length = struct.pack("<I", len(block)) 757 #Fixed 16 bytes, 758 # gzip magic bytes (4) mod time (4), 759 # gzip flag (1), os (1), extra length which is six (2), 760 # sub field which is BC (2), sub field length of two (2), 761 #Variable data, 762 #2 bytes: block length as BC sub field (2) 763 #X bytes: the data 764 #8 bytes: crc (4), uncompressed data length (4) 765 data = _bgzf_header + bsize + compressed + crc + uncompressed_length 766 self._handle.write(data)
767
768 - def write(self, data):
769 #TODO - Check bytes vs unicode 770 data = _as_bytes(data) 771 #block_size = 2**16 = 65536 772 data_len = len(data) 773 if len(self._buffer) + data_len < 65536: 774 #print("Cached %r" % data) 775 self._buffer += data 776 return 777 else: 778 #print("Got %r, writing out some data..." % data) 779 self._buffer += data 780 while len(self._buffer) >= 65536: 781 self._write_block(self._buffer[:65536]) 782 self._buffer = self._buffer[65536:]
783
784 - def flush(self):
785 while len(self._buffer) >= 65536: 786 self._write_block(self._buffer[:65535]) 787 self._buffer = self._buffer[65535:] 788 self._write_block(self._buffer) 789 self._buffer = b"" 790 self._handle.flush()
791
792 - def close(self):
793 """Flush data, write 28 bytes empty BGZF EOF marker, and close the BGZF file.""" 794 if self._buffer: 795 self.flush() 796 #samtools will look for a magic EOF marker, just a 28 byte empty BGZF block, 797 #and if it is missing warns the BAM file may be truncated. In addition to 798 #samtools writing this block, so too does bgzip - so we should too. 799 self._handle.write(_bgzf_eof) 800 self._handle.flush() 801 self._handle.close()
802
803 - def tell(self):
804 """Returns a BGZF 64-bit virtual offset.""" 805 return make_virtual_offset(self._handle.tell(), len(self._buffer))
806
807 - def seekable(self):
808 #Not seekable, but we do support tell... 809 return False
810
811 - def isatty(self):
812 return False
813
814 - def fileno(self):
815 return self._handle.fileno()
816
817 - def __enter__(self):
818 return self
819
820 - def __exit__(self, type, value, traceback):
821 self.close()
822 823 824 if __name__ == "__main__": 825 import sys 826 if len(sys.argv) > 1: 827 print("Call this with no arguments and pipe uncompressed data in on stdin") 828 print("and it will produce BGZF compressed data on stdout. e.g.") 829 print("") 830 print("./bgzf.py < example.fastq > example.fastq.bgz") 831 print("") 832 print("The extension convention of *.bgz is to distinugish these from *.gz") 833 print("used for standard gzipped files without the block structure of BGZF.") 834 print("You can use the standard gunzip command to decompress BGZF files,") 835 print("if it complains about the extension try something like this:") 836 print("") 837 print("cat example.fastq.bgz | gunzip > example.fastq") 838 print("") 839 print("See also the tool bgzip that comes with samtools") 840 sys.exit(0) 841 842 sys.stderr.write("Producing BGZF output from stdin...\n") 843 w = BgzfWriter(fileobj=sys.stdout) 844 while True: 845 data = sys.stdin.read(65536) 846 w.write(data) 847 if not data: 848 break 849 #Doing close with write an empty BGZF block as EOF marker: 850 w.close() 851 sys.stderr.write("BGZF data produced\n") 852