Package Bio :: Module bgzf
[hide private]
[frames] | no frames]

Source Code for Module Bio.bgzf

  1  #!/usr/bin/env python 
  2  # Copyright 2010-2013 by Peter Cock. 
  3  # All rights reserved. 
  4  # This code is part of the Biopython distribution and governed by its 
  5  # license.  Please see the LICENSE file that should have been included 
  6  # as part of this package. 
  7  r"""Read and write BGZF compressed files (the GZIP variant used in BAM). 
  8   
  9  The SAM/BAM file format (Sequence Alignment/Map) comes in a plain text 
 10  format (SAM), and a compressed binary format (BAM). The latter uses a 
 11  modified form of gzip compression called BGZF (Blocked GNU Zip Format), 
 12  which can be applied to any file format to provide compression with 
 13  efficient random access. BGZF is described together with the SAM/BAM 
 14  file format at http://samtools.sourceforge.net/SAM1.pdf 
 15   
 16  Please read the text below about 'virtual offsets' before using BGZF 
 17  files for random access. 
 18   
 19   
 20  Aim of this module 
 21  ------------------ 
 22   
 23  The Python gzip library can be used to read BGZF files, since for 
 24  decompression they are just (specialised) gzip files. What this 
 25  module aims to facilitate is random access to BGZF files (using the 
 26  'virtual offset' idea), and writing BGZF files (which means using 
 27  suitably sized gzip blocks and writing the extra 'BC' field in the 
 28  gzip headers). As in the gzip library, the zlib library is used 
 29  internally. 
 30   
 31  In addition to being required for random access to and writing of 
 32  BAM files, the BGZF format can also be used on other sequential 
 33  data (in the sense of one record after another), such as most of 
 34  the sequence data formats supported in Bio.SeqIO (like FASTA, 
 35  FASTQ, GenBank, etc) or large MAF alignments. 
 36   
 37  The Bio.SeqIO indexing functions use this module to support BGZF files. 
 38   
 39   
 40  Technical Introduction to BGZF 
 41  ------------------------------ 
 42   
 43  The gzip file format allows multiple compressed blocks, each of which 
 44  could be a stand alone gzip file. As an interesting bonus, this means 
 45  you can use Unix "cat" to combined to gzip files into one by 
 46  concatenating them. Also, each block can have one of several compression 
 47  levels (including uncompressed, which actually takes up a little bit 
 48  more space due to the gzip header). 
 49   
 50  What the BAM designers realised was that while random access to data 
 51  stored in traditional gzip files was slow, breaking the file into 
 52  gzip blocks would allow fast random access to each block. To access 
 53  a particular piece of the decompressed data, you just need to know 
 54  which block it starts in (the offset of the gzip block start), and 
 55  how far into the (decompressed) contents of the block you need to 
 56  read. 
 57   
 58  One problem with this is finding the gzip block sizes efficiently. 
 59  You can do it with a standard gzip file, but it requires every block 
 60  to be decompressed -- and that would be rather slow. Additionally 
 61  typical gzip files may use very large blocks. 
 62   
 63  All that differs in BGZF is that compressed size of each gzip block 
 64  is limited to 2^16 bytes, and an extra 'BC' field in the gzip header 
 65  records this size. Traditional decompression tools can ignore this, 
 66  and unzip the file just like any other gzip file. 
 67   
 68  The point of this is you can look at the first BGZF block, find out 
 69  how big it is from this 'BC' header, and thus seek immediately to 
 70  the second block, and so on. 
 71   
 72  The BAM indexing scheme records read positions using a 64 bit 
 73  'virtual offset', comprising coffset<<16|uoffset, where coffset is 
 74  the file offset of the BGZF block containing the start of the read 
 75  (unsigned integer using up to 64-16 = 48 bits), and uoffset is the 
 76  offset within the (decompressed) block (unsigned 16 bit integer). 
 77   
 78  This limits you to BAM files where the last block starts by 2^48 
 79  bytes, or 256 petabytes, and the decompressed size of each block 
 80  is at most 2^16 bytes, or 64kb. Note that this matches the BGZF 
 81  'BC' field size which limits the compressed size of each block to 
 82  2^16 bytes, allowing for BAM files to use BGZF with no gzip 
 83  compression (useful for intermediate files in memory to reduced 
 84  CPU load). 
 85   
 86   
 87  Warning about namespaces 
 88  ------------------------ 
 89   
 90  It is considered a bad idea to use "from XXX import *" in Python, because 
 91  it pollutes the namespace. This is a real issue with Bio.bgzf (and the 
 92  standard Python library gzip) because they contain a function called open 
 93  i.e. Suppose you do this: 
 94   
 95  >>> from Bio.bgzf import * 
 96  >>> print(open.__module__) 
 97  Bio.bgzf 
 98   
 99  Or, 
100   
101  >>> from gzip import * 
102  >>> print(open.__module__) 
103  gzip 
104   
105  Notice that the open function has been replaced. You can "fix" this if you 
106  need to by importing the built-in open function: 
107   
108  >>> try: 
109  ...     from __builtin__ import open # Python 2 
110  ... except ImportError: 
111  ...     from builtins import open # Python 3 
112  ... 
113   
114  However, what we recommend instead is to use the explicit namespace, e.g. 
115   
116  >>> from Bio import bgzf 
117  >>> print(bgzf.open.__module__) 
118  Bio.bgzf 
119   
120   
121  Example 
122  ------- 
123   
124  This is an ordinary GenBank file compressed using BGZF, so it can 
125  be decompressed using gzip, 
126   
127  >>> import gzip 
128  >>> handle = gzip.open("GenBank/NC_000932.gb.bgz", "r") 
129  >>> assert 0 == handle.tell() 
130  >>> line = handle.readline() 
131  >>> assert 80 == handle.tell() 
132  >>> line = handle.readline() 
133  >>> assert 143 == handle.tell() 
134  >>> data = handle.read(70000) 
135  >>> assert 70143 == handle.tell() 
136  >>> handle.close() 
137   
138  We can also access the file using the BGZF reader - but pay 
139  attention to the file offsets which will be explained below: 
140   
141  >>> handle = BgzfReader("GenBank/NC_000932.gb.bgz", "r") 
142  >>> assert 0 == handle.tell() 
143  >>> print(handle.readline().rstrip()) 
144  LOCUS       NC_000932             154478 bp    DNA     circular PLN 15-APR-2009 
145  >>> assert 80 == handle.tell() 
146  >>> print(handle.readline().rstrip()) 
147  DEFINITION  Arabidopsis thaliana chloroplast, complete genome. 
148  >>> assert 143 == handle.tell() 
149  >>> data = handle.read(70000) 
150  >>> assert 987828735 == handle.tell() 
151  >>> print(handle.readline().rstrip()) 
152  f="GeneID:844718" 
153  >>> print(handle.readline().rstrip()) 
154       CDS             complement(join(84337..84771,85454..85843)) 
155  >>> offset = handle.seek(make_virtual_offset(55074, 126)) 
156  >>> print(handle.readline().rstrip()) 
157      68521 tatgtcattc gaaattgtat aaagacaact cctatttaat agagctattt gtgcaagtat 
158  >>> handle.close() 
159   
160  Notice the handle's offset looks different as a BGZF file. This 
161  brings us to the key point about BGZF, which is the block structure: 
162   
163  >>> handle = open("GenBank/NC_000932.gb.bgz", "rb") 
164  >>> for values in BgzfBlocks(handle): 
165  ...     print("Raw start %i, raw length %i; data start %i, data length %i" % values) 
166  Raw start 0, raw length 15073; data start 0, data length 65536 
167  Raw start 15073, raw length 17857; data start 65536, data length 65536 
168  Raw start 32930, raw length 22144; data start 131072, data length 65536 
169  Raw start 55074, raw length 22230; data start 196608, data length 65536 
170  Raw start 77304, raw length 14939; data start 262144, data length 43478 
171  Raw start 92243, raw length 28; data start 305622, data length 0 
172  >>> handle.close() 
173   
174  In this example the first three blocks are 'full' and hold 65536 bytes 
175  of uncompressed data. The fourth block isn't full and holds 43478 bytes. 
176  Finally there is a special empty fifth block which takes 28 bytes on 
177  disk and serves as an 'end of file' (EOF) marker. If this is missing, 
178  it is possible your BGZF file is incomplete. 
179   
180  By reading ahead 70,000 bytes we moved into the second BGZF block, 
181  and at that point the BGZF virtual offsets start to look different 
182  to a simple offset into the decompressed data as exposed by the gzip 
183  library. 
184   
185  As an example, consider seeking to the decompressed position 196734. 
186  Since 196734 = 65536 + 65536 + 65536 + 126 = 65536*3 + 126, this 
187  is equivalent to jumping the first three blocks (which in this 
188  specific example are all size 65536 after decompression - which 
189  does not always hold) and starting at byte 126 of the fourth block 
190  (after decompression). For BGZF, we need to know the fourth block's 
191  offset of 55074 and the offset within the block of 126 to get the 
192  BGZF virtual offset. 
193   
194  >>> print(55074 << 16 | 126) 
195  3609329790 
196  >>> print(bgzf.make_virtual_offset(55074, 126)) 
197  3609329790 
198   
199  Thus for this BGZF file, decompressed position 196734 corresponds 
200  to the virtual offset 3609329790. However, another BGZF file with 
201  different contents would have compressed more or less efficiently, 
202  so the compressed blocks would be different sizes. What this means 
203  is the mapping between the uncompressed offset and the compressed 
204  virtual offset depends on the BGZF file you are using. 
205   
206  If you are accessing a BGZF file via this module, just use the 
207  handle.tell() method to note the virtual offset of a position you 
208  may later want to return to using handle.seek(). 
209   
210  The catch with BGZF virtual offsets is while they can be compared 
211  (which offset comes first in the file), you cannot safely subtract 
212  them to get the size of the data between them, nor add/subtract 
213  a relative offset. 
214   
215  Of course you can parse this file with Bio.SeqIO using BgzfReader, 
216  although there isn't any benefit over using gzip.open(...), unless 
217  you want to index BGZF compressed sequence files: 
218   
219  >>> from Bio import SeqIO 
220  >>> handle = BgzfReader("GenBank/NC_000932.gb.bgz") 
221  >>> record = SeqIO.read(handle, "genbank") 
222  >>> handle.close() 
223  >>> print(record.id) 
224  NC_000932.1 
225   
226  """ 
227   
228  from __future__ import print_function 
229   
230  import sys # to detect when under Python 2 
231  import zlib 
232  import struct 
233   
234  from Bio._py3k import _as_bytes, _as_string 
235  from Bio._py3k import open as _open 
236   
237  #For Python 2 can just use: _bgzf_magic = '\x1f\x8b\x08\x04' 
238  #but need to use bytes on Python 3 
239  _bgzf_magic = b"\x1f\x8b\x08\x04" 
240  _bgzf_header = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00\x42\x43\x02\x00" 
241  _bgzf_eof = b"\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00BC\x02\x00\x1b\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00" 
242  _bytes_BC = b"BC" 
243   
244   
245 -def open(filename, mode="rb"):
246 """Open a BGZF file for reading, writing or appending.""" 247 if "r" in mode.lower(): 248 return BgzfReader(filename, mode) 249 elif "w" in mode.lower() or "a" in mode.lower(): 250 return BgzfWriter(filename, mode) 251 else: 252 raise ValueError("Bad mode %r" % mode)
253 254
255 -def make_virtual_offset(block_start_offset, within_block_offset):
256 """Compute a BGZF virtual offset from block start and within block offsets. 257 258 The BAM indexing scheme records read positions using a 64 bit 259 'virtual offset', comprising in C terms: 260 261 block_start_offset<<16 | within_block_offset 262 263 Here block_start_offset is the file offset of the BGZF block 264 start (unsigned integer using up to 64-16 = 48 bits), and 265 within_block_offset within the (decompressed) block (unsigned 266 16 bit integer). 267 268 >>> make_virtual_offset(0, 0) 269 0 270 >>> make_virtual_offset(0, 1) 271 1 272 >>> make_virtual_offset(0, 2**16 - 1) 273 65535 274 >>> make_virtual_offset(0, 2**16) 275 Traceback (most recent call last): 276 ... 277 ValueError: Require 0 <= within_block_offset < 2**16, got 65536 278 279 >>> 65536 == make_virtual_offset(1, 0) 280 True 281 >>> 65537 == make_virtual_offset(1, 1) 282 True 283 >>> 131071 == make_virtual_offset(1, 2**16 - 1) 284 True 285 286 >>> 6553600000 == make_virtual_offset(100000, 0) 287 True 288 >>> 6553600001 == make_virtual_offset(100000, 1) 289 True 290 >>> 6553600010 == make_virtual_offset(100000, 10) 291 True 292 293 >>> make_virtual_offset(2**48, 0) 294 Traceback (most recent call last): 295 ... 296 ValueError: Require 0 <= block_start_offset < 2**48, got 281474976710656 297 298 """ 299 if within_block_offset < 0 or within_block_offset >= 65536: 300 raise ValueError("Require 0 <= within_block_offset < 2**16, got %i" % within_block_offset) 301 if block_start_offset < 0 or block_start_offset >= 281474976710656: 302 raise ValueError("Require 0 <= block_start_offset < 2**48, got %i" % block_start_offset) 303 return (block_start_offset<<16) | within_block_offset
304 305
306 -def split_virtual_offset(virtual_offset):
307 """Divides a 64-bit BGZF virtual offset into block start & within block offsets. 308 309 >>> (100000, 0) == split_virtual_offset(6553600000) 310 True 311 >>> (100000, 10) == split_virtual_offset(6553600010) 312 True 313 314 """ 315 start = virtual_offset>>16 316 return start, virtual_offset ^ (start<<16)
317 318
319 -def BgzfBlocks(handle):
320 """Low level debugging function to inspect BGZF blocks. 321 322 Returns the block start offset (see virtual offsets), the block 323 length (add these for the start of the next block), and the 324 decompressed length of the blocks contents (limited to 65536 in 325 BGZF). 326 327 >>> try: 328 ... from __builtin__ import open # Python 2 329 ... except ImportError: 330 ... from builtins import open # Python 3 331 ... 332 >>> handle = open("SamBam/ex1.bam", "rb") 333 >>> for values in BgzfBlocks(handle): 334 ... print("Raw start %i, raw length %i; data start %i, data length %i" % values) 335 Raw start 0, raw length 18239; data start 0, data length 65536 336 Raw start 18239, raw length 18223; data start 65536, data length 65536 337 Raw start 36462, raw length 18017; data start 131072, data length 65536 338 Raw start 54479, raw length 17342; data start 196608, data length 65536 339 Raw start 71821, raw length 17715; data start 262144, data length 65536 340 Raw start 89536, raw length 17728; data start 327680, data length 65536 341 Raw start 107264, raw length 17292; data start 393216, data length 63398 342 Raw start 124556, raw length 28; data start 456614, data length 0 343 >>> handle.close() 344 345 Indirectly we can tell this file came from an old version of 346 samtools because all the blocks (except the final one and the 347 dummy empty EOF marker block) are 65536 bytes. Later versions 348 avoid splitting a read between two blocks, and give the header 349 its own block (useful to speed up replacing the header). You 350 can see this in ex1_refresh.bam created using samtools 0.1.18: 351 352 samtools view -b ex1.bam > ex1_refresh.bam 353 354 >>> handle = open("SamBam/ex1_refresh.bam", "rb") 355 >>> for values in BgzfBlocks(handle): 356 ... print("Raw start %i, raw length %i; data start %i, data length %i" % values) 357 Raw start 0, raw length 53; data start 0, data length 38 358 Raw start 53, raw length 18195; data start 38, data length 65434 359 Raw start 18248, raw length 18190; data start 65472, data length 65409 360 Raw start 36438, raw length 18004; data start 130881, data length 65483 361 Raw start 54442, raw length 17353; data start 196364, data length 65519 362 Raw start 71795, raw length 17708; data start 261883, data length 65411 363 Raw start 89503, raw length 17709; data start 327294, data length 65466 364 Raw start 107212, raw length 17390; data start 392760, data length 63854 365 Raw start 124602, raw length 28; data start 456614, data length 0 366 >>> handle.close() 367 368 The above example has no embedded SAM header (thus the first block 369 is very small at just 38 bytes of decompressed data), while the next 370 example does (a larger block of 103 bytes). Notice that the rest of 371 the blocks show the same sizes (they contain the same read data): 372 373 >>> handle = open("SamBam/ex1_header.bam", "rb") 374 >>> for values in BgzfBlocks(handle): 375 ... print("Raw start %i, raw length %i; data start %i, data length %i" % values) 376 Raw start 0, raw length 104; data start 0, data length 103 377 Raw start 104, raw length 18195; data start 103, data length 65434 378 Raw start 18299, raw length 18190; data start 65537, data length 65409 379 Raw start 36489, raw length 18004; data start 130946, data length 65483 380 Raw start 54493, raw length 17353; data start 196429, data length 65519 381 Raw start 71846, raw length 17708; data start 261948, data length 65411 382 Raw start 89554, raw length 17709; data start 327359, data length 65466 383 Raw start 107263, raw length 17390; data start 392825, data length 63854 384 Raw start 124653, raw length 28; data start 456679, data length 0 385 >>> handle.close() 386 387 """ 388 data_start = 0 389 while True: 390 start_offset = handle.tell() 391 #This may raise StopIteration which is perfect here 392 block_length, data = _load_bgzf_block(handle) 393 data_len = len(data) 394 yield start_offset, block_length, data_start, data_len 395 data_start += data_len
396 397
398 -def _load_bgzf_block(handle, text_mode=False):
399 """Internal function to load the next BGZF function (PRIVATE).""" 400 magic = handle.read(4) 401 if not magic: 402 #End of file 403 raise StopIteration 404 if magic != _bgzf_magic: 405 raise ValueError(r"A BGZF (e.g. a BAM file) block should start with " 406 r"%r, not %r; handle.tell() now says %r" 407 % (_bgzf_magic, magic, handle.tell())) 408 gzip_mod_time, gzip_extra_flags, gzip_os, extra_len = \ 409 struct.unpack("<LBBH", handle.read(8)) 410 411 block_size = None 412 x_len = 0 413 while x_len < extra_len: 414 subfield_id = handle.read(2) 415 subfield_len = struct.unpack("<H", handle.read(2))[0] # uint16_t 416 subfield_data = handle.read(subfield_len) 417 x_len += subfield_len + 4 418 if subfield_id == _bytes_BC: 419 assert subfield_len == 2, "Wrong BC payload length" 420 assert block_size is None, "Two BC subfields?" 421 block_size = struct.unpack("<H", subfield_data)[0] + 1 # uint16_t 422 assert x_len == extra_len, (x_len, extra_len) 423 assert block_size is not None, "Missing BC, this isn't a BGZF file!" 424 #Now comes the compressed data, CRC, and length of uncompressed data. 425 deflate_size = block_size - 1 - extra_len - 19 426 d = zlib.decompressobj(-15) # Negative window size means no headers 427 data = d.decompress(handle.read(deflate_size)) + d.flush() 428 expected_crc = handle.read(4) 429 expected_size = struct.unpack("<I", handle.read(4))[0] 430 assert expected_size == len(data), \ 431 "Decompressed to %i, not %i" % (len(data), expected_size) 432 #Should cope with a mix of Python platforms... 433 crc = zlib.crc32(data) 434 if crc < 0: 435 crc = struct.pack("<i", crc) 436 else: 437 crc = struct.pack("<I", crc) 438 assert expected_crc == crc, \ 439 "CRC is %s, not %s" % (crc, expected_crc) 440 if text_mode: 441 return block_size, _as_string(data) 442 else: 443 return block_size, data
444 445
446 -class BgzfReader(object):
447 r"""BGZF reader, acts like a read only handle but seek/tell differ. 448 449 Let's use the BgzfBlocks function to have a peak at the BGZF blocks 450 in an example BAM file, 451 452 >>> try: 453 ... from __builtin__ import open # Python 2 454 ... except ImportError: 455 ... from builtins import open # Python 3 456 ... 457 >>> handle = open("SamBam/ex1.bam", "rb") 458 >>> for values in BgzfBlocks(handle): 459 ... print("Raw start %i, raw length %i; data start %i, data length %i" % values) 460 Raw start 0, raw length 18239; data start 0, data length 65536 461 Raw start 18239, raw length 18223; data start 65536, data length 65536 462 Raw start 36462, raw length 18017; data start 131072, data length 65536 463 Raw start 54479, raw length 17342; data start 196608, data length 65536 464 Raw start 71821, raw length 17715; data start 262144, data length 65536 465 Raw start 89536, raw length 17728; data start 327680, data length 65536 466 Raw start 107264, raw length 17292; data start 393216, data length 63398 467 Raw start 124556, raw length 28; data start 456614, data length 0 468 >>> handle.close() 469 470 Now let's see how to use this block information to jump to 471 specific parts of the decompressed BAM file: 472 473 >>> handle = BgzfReader("SamBam/ex1.bam", "rb") 474 >>> assert 0 == handle.tell() 475 >>> magic = handle.read(4) 476 >>> assert 4 == handle.tell() 477 478 So far nothing so strange, we got the magic marker used at the 479 start of a decompressed BAM file, and the handle position makes 480 sense. Now however, let's jump to the end of this block and 4 481 bytes into the next block by reading 65536 bytes, 482 483 >>> data = handle.read(65536) 484 >>> len(data) 485 65536 486 >>> assert 1195311108 == handle.tell() 487 488 Expecting 4 + 65536 = 65540 were you? Well this is a BGZF 64-bit 489 virtual offset, which means: 490 491 >>> split_virtual_offset(1195311108) 492 (18239, 4) 493 494 You should spot 18239 as the start of the second BGZF block, while 495 the 4 is the offset into this block. See also make_virtual_offset, 496 497 >>> make_virtual_offset(18239, 4) 498 1195311108 499 500 Let's jump back to almost the start of the file, 501 502 >>> make_virtual_offset(0, 2) 503 2 504 >>> handle.seek(2) 505 2 506 >>> handle.close() 507 508 Note that you can use the max_cache argument to limit the number of 509 BGZF blocks cached in memory. The default is 100, and since each 510 block can be up to 64kb, the default cache could take up to 6MB of 511 RAM. The cache is not important for reading through the file in one 512 pass, but is important for improving performance of random access. 513 """ 514
515 - def __init__(self, filename=None, mode="r", fileobj=None, max_cache=100):
516 #TODO - Assuming we can seek, check for 28 bytes EOF empty block 517 #and if missing warn about possible truncation (as in samtools)? 518 if max_cache < 1: 519 raise ValueError("Use max_cache with a minimum of 1") 520 #Must open the BGZF file in binary mode, but we may want to 521 #treat the contents as either text or binary (unicode or 522 #bytes under Python 3) 523 if fileobj: 524 assert filename is None 525 handle = fileobj 526 assert "b" in handle.mode.lower() 527 else: 528 if "w" in mode.lower() \ 529 or "a" in mode.lower(): 530 raise ValueError("Must use read mode (default), not write or append mode") 531 handle = _open(filename, "rb") 532 self._text = "b" not in mode.lower() 533 if self._text: 534 self._newline = "\n" 535 else: 536 self._newline = b"\n" 537 self._handle = handle 538 self.max_cache = max_cache 539 self._buffers = {} 540 self._block_start_offset = None 541 self._block_raw_length = None 542 self._load_block(handle.tell())
543
544 - def _load_block(self, start_offset=None):
545 if start_offset is None: 546 #If the file is being read sequentially, then _handle.tell() 547 #should be pointing at the start of the next block. 548 #However, if seek has been used, we can't assume that. 549 start_offset = self._block_start_offset + self._block_raw_length 550 if start_offset == self._block_start_offset: 551 self._within_block_offset = 0 552 return 553 elif start_offset in self._buffers: 554 #Already in cache 555 self._buffer, self._block_raw_length = self._buffers[start_offset] 556 self._within_block_offset = 0 557 self._block_start_offset = start_offset 558 return 559 #Must hit the disk... first check cache limits, 560 while len(self._buffers) >= self.max_cache: 561 #TODO - Implemente LRU cache removal? 562 self._buffers.popitem() 563 #Now load the block 564 handle = self._handle 565 if start_offset is not None: 566 handle.seek(start_offset) 567 self._block_start_offset = handle.tell() 568 try: 569 block_size, self._buffer = _load_bgzf_block(handle, self._text) 570 except StopIteration: 571 #EOF 572 block_size = 0 573 if self._text: 574 self._buffer = "" 575 else: 576 self._buffer = b"" 577 self._within_block_offset = 0 578 self._block_raw_length = block_size 579 #Finally save the block in our cache, 580 self._buffers[self._block_start_offset] = self._buffer, block_size
581
582 - def tell(self):
583 """Returns a 64-bit unsigned BGZF virtual offset.""" 584 if 0 < self._within_block_offset == len(self._buffer): 585 #Special case where we're right at the end of a (non empty) block. 586 #For non-maximal blocks could give two possible virtual offsets, 587 #but for a maximal block can't use 65536 as the within block 588 #offset. Therefore for consistency, use the next block and a 589 #within block offset of zero. 590 return (self._block_start_offset + self._block_raw_length) << 16 591 else: 592 #return make_virtual_offset(self._block_start_offset, 593 # self._within_block_offset) 594 #TODO - Include bounds checking as in make_virtual_offset? 595 return (self._block_start_offset<<16) | self._within_block_offset
596
597 - def seek(self, virtual_offset):
598 """Seek to a 64-bit unsigned BGZF virtual offset.""" 599 #Do this inline to avoid a function call, 600 #start_offset, within_block = split_virtual_offset(virtual_offset) 601 start_offset = virtual_offset>>16 602 within_block = virtual_offset ^ (start_offset<<16) 603 if start_offset != self._block_start_offset: 604 #Don't need to load the block if already there 605 #(this avoids a function call since _load_block would do nothing) 606 self._load_block(start_offset) 607 assert start_offset == self._block_start_offset 608 if within_block > len(self._buffer) \ 609 and not (within_block == 0 and len(self._buffer)==0): 610 raise ValueError("Within offset %i but block size only %i" 611 % (within_block, len(self._buffer))) 612 self._within_block_offset = within_block 613 #assert virtual_offset == self.tell(), \ 614 # "Did seek to %i (%i, %i), but tell says %i (%i, %i)" \ 615 # % (virtual_offset, start_offset, within_block, 616 # self.tell(), self._block_start_offset, self._within_block_offset) 617 return virtual_offset
618
619 - def read(self, size=-1):
620 if size < 0: 621 raise NotImplementedError("Don't be greedy, that could be massive!") 622 elif size == 0: 623 if self._text: 624 return "" 625 else: 626 return b"" 627 elif self._within_block_offset + size <= len(self._buffer): 628 #This may leave us right at the end of a block 629 #(lazy loading, don't load the next block unless we have too) 630 data = self._buffer[self._within_block_offset:self._within_block_offset + size] 631 self._within_block_offset += size 632 assert data # Must be at least 1 byte 633 return data 634 else: 635 data = self._buffer[self._within_block_offset:] 636 size -= len(data) 637 self._load_block() # will reset offsets 638 #TODO - Test with corner case of an empty block followed by 639 #a non-empty block 640 if not self._buffer: 641 return data # EOF 642 elif size: 643 #TODO - Avoid recursion 644 return data + self.read(size) 645 else: 646 #Only needed the end of the last block 647 return data
648
649 - def readline(self):
650 i = self._buffer.find(self._newline, self._within_block_offset) 651 #Three cases to consider, 652 if i==-1: 653 #No newline, need to read in more data 654 data = self._buffer[self._within_block_offset:] 655 self._load_block() # will reset offsets 656 if not self._buffer: 657 return data # EOF 658 else: 659 #TODO - Avoid recursion 660 return data + self.readline() 661 elif i + 1 == len(self._buffer): 662 #Found new line, but right at end of block (SPECIAL) 663 data = self._buffer[self._within_block_offset:] 664 #Must now load the next block to ensure tell() works 665 self._load_block() # will reset offsets 666 assert data 667 return data 668 else: 669 #Found new line, not at end of block (easy case, no IO) 670 data = self._buffer[self._within_block_offset:i+1] 671 self._within_block_offset = i + 1 672 #assert data.endswith(self._newline) 673 return data
674
675 - def __next__(self):
676 line = self.readline() 677 if not line: 678 raise StopIteration 679 return line
680 681 if sys.version_info[0] < 3:
682 - def next(self):
683 """Python 2 style alias for Python 3 style __next__ method.""" 684 return self.__next__()
685
686 - def __iter__(self):
687 return self
688
689 - def close(self):
690 self._handle.close() 691 self._buffer = None 692 self._block_start_offset = None 693 self._buffers = None
694
695 - def seekable(self):
696 return True
697
698 - def isatty(self):
699 return False
700
701 - def fileno(self):
702 return self._handle.fileno()
703 704
705 -class BgzfWriter(object):
706
707 - def __init__(self, filename=None, mode="w", fileobj=None, compresslevel=6):
708 if fileobj: 709 assert filename is None 710 handle = fileobj 711 else: 712 if "w" not in mode.lower() \ 713 and "a" not in mode.lower(): 714 raise ValueError("Must use write or append mode, not %r" % mode) 715 if "a" in mode.lower(): 716 handle = _open(filename, "ab") 717 else: 718 handle = _open(filename, "wb") 719 self._text = "b" not in mode.lower() 720 self._handle = handle 721 self._buffer = b"" 722 self.compresslevel = compresslevel
723
724 - def _write_block(self, block):
725 #print("Saving %i bytes" % len(block)) 726 start_offset = self._handle.tell() 727 assert len(block) <= 65536 728 #Giving a negative window bits means no gzip/zlib headers, -15 used in samtools 729 c = zlib.compressobj(self.compresslevel, 730 zlib.DEFLATED, 731 -15, 732 zlib.DEF_MEM_LEVEL, 733 0) 734 compressed = c.compress(block) + c.flush() 735 del c 736 assert len(compressed) < 65536, "TODO - Didn't compress enough, try less data in this block" 737 crc = zlib.crc32(block) 738 #Should cope with a mix of Python platforms... 739 if crc < 0: 740 crc = struct.pack("<i", crc) 741 else: 742 crc = struct.pack("<I", crc) 743 bsize = struct.pack("<H", len(compressed)+25) # includes -1 744 crc = struct.pack("<I", zlib.crc32(block) & 0xffffffff) 745 uncompressed_length = struct.pack("<I", len(block)) 746 #Fixed 16 bytes, 747 # gzip magic bytes (4) mod time (4), 748 # gzip flag (1), os (1), extra length which is six (2), 749 # sub field which is BC (2), sub field length of two (2), 750 #Variable data, 751 #2 bytes: block length as BC sub field (2) 752 #X bytes: the data 753 #8 bytes: crc (4), uncompressed data length (4) 754 data = _bgzf_header + bsize + compressed + crc + uncompressed_length 755 self._handle.write(data)
756
757 - def write(self, data):
758 #TODO - Check bytes vs unicode 759 data = _as_bytes(data) 760 #block_size = 2**16 = 65536 761 data_len = len(data) 762 if len(self._buffer) + data_len < 65536: 763 #print("Cached %r" % data) 764 self._buffer += data 765 return 766 else: 767 #print("Got %r, writing out some data..." % data) 768 self._buffer += data 769 while len(self._buffer) >= 65536: 770 self._write_block(self._buffer[:65536]) 771 self._buffer = self._buffer[65536:]
772
773 - def flush(self):
774 while len(self._buffer) >= 65536: 775 self._write_block(self._buffer[:65535]) 776 self._buffer = self._buffer[65535:] 777 self._write_block(self._buffer) 778 self._buffer = b"" 779 self._handle.flush()
780
781 - def close(self):
782 """Flush data, write 28 bytes empty BGZF EOF marker, and close the BGZF file.""" 783 if self._buffer: 784 self.flush() 785 #samtools will look for a magic EOF marker, just a 28 byte empty BGZF block, 786 #and if it is missing warns the BAM file may be truncated. In addition to 787 #samtools writing this block, so too does bgzip - so we should too. 788 self._handle.write(_bgzf_eof) 789 self._handle.flush() 790 self._handle.close()
791
792 - def tell(self):
793 """Returns a BGZF 64-bit virtual offset.""" 794 return make_virtual_offset(self._handle.tell(), len(self._buffer))
795
796 - def seekable(self):
797 #Not seekable, but we do support tell... 798 return False
799
800 - def isatty(self):
801 return False
802
803 - def fileno(self):
804 return self._handle.fileno()
805 806 807 if __name__ == "__main__": 808 import sys 809 if len(sys.argv) > 1: 810 print("Call this with no arguments and pipe uncompressed data in on stdin") 811 print("and it will produce BGZF compressed data on stdout. e.g.") 812 print("") 813 print("./bgzf.py < example.fastq > example.fastq.bgz") 814 print("") 815 print("The extension convention of *.bgz is to distinugish these from *.gz") 816 print("used for standard gzipped files without the block structure of BGZF.") 817 print("You can use the standard gunzip command to decompress BGZF files,") 818 print("if it complains about the extension try something like this:") 819 print("") 820 print("cat example.fastq.bgz | gunzip > example.fastq") 821 print("") 822 print("See also the tool bgzip that comes with samtools") 823 sys.exit(0) 824 825 sys.stderr.write("Producing BGZF output from stdin...\n") 826 w = BgzfWriter(fileobj=sys.stdout) 827 while True: 828 data = sys.stdin.read(65536) 829 w.write(data) 830 if not data: 831 break 832 #Doing close with write an empty BGZF block as EOF marker: 833 w.close() 834 sys.stderr.write("BGZF data produced\n") 835