  use Biber;
  my $biber = Biber->new();
  $biber->parse_ctrlfile("example.bcf");
  $biber->prepare;
Initialize the Biber object, optionally passing named options as arguments.
Output summary of warnings/errors/misc before exit
Returns a File::Temp directory object used to hold temporary files during processing
Returns the directory name of the File::Temp directory object
  my $sections = $biber->sections

Returns a Biber::Sections object describing the bibliography sections
Adds a Biber::Sections object. Used externally, e.g. from the biber program
  my $datalists = $biber->datalists

Returns a Biber::DataLists object describing the bibliography sorting lists
Returns a Biber::LangTags object containing a parser for BCP47 tags
Sets the object used to output final results. Must be a subclass of Biber::Output::base
Returns the current preamble as an array ref
Returns the object used to output final results
Sets the current section number that we are working on
Gets the current section number that we are working on
Fakes parts of the control file for tool mode
This method reads the control file generated by biblatex to work out the various biblatex options. See Constants.pm for defaults and an example of the data structure being built here.
Place to put misc pre-processing things needed later
Place to put misc pre-processing things needed later for tool mode
Resolves aliases to their real keys in xref/crossref/xdata fields which take keys as values. We use set_datafield as we are overriding the alias in the datasource
Remove citekey aliases from citekeys as they don't point to real entries.
This instantiates any dynamic entries so that they are available for processing later on. This has to be done before almost all other processing so that when we call $section->bibentry($key), as we do many times in the code, we don't die because there is a key but no Entry object.
Resolve xdata
Promotes set member to cited status
  $biber->preprocess_sets

This records the set information for use later
  $biber->process_interentry

This does several things:

1. Ensures proper inheritance of data from cross-references.
2. Ensures that crossrefs/xrefs that are directly cited or cross-referenced at least mincrossrefs/minxrefs times are included in the bibliography.
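For example, with mincrossrefs=2 (the biblatex default), a parent entry that is cross-referenced by two cited children is pulled into the bibliography even if it is never cited itself. A schematic .bib fragment (entry keys and titles are purely illustrative):

```bibtex
@proceedings{parent,
  title = {Example Proceedings},
  year  = {2020},
}
@inproceedings{childA,
  crossref = {parent},
  title    = {First Paper},
}
@inproceedings{childB,
  crossref = {parent},
  title    = {Second Paper},
}
```

If only childA and childB are cited, parent still appears in the bibliography because it has been cross-referenced mincrossrefs times; the children also inherit data (e.g. the year) from it.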
Validates bib data according to a data model. Note that we are validating the internal Biber::Entries after they have been created from the datasources, so this is datasource neutral, as it should be. It is here to enforce adherence to what biblatex expects.
Generate name strings and disambiguation schema. Has to be in the context of a data list (reference context) because uniquenametemplate can be specified per-list/context
Adds required per-entry options etc. to sets
Processing of entries which is not list-specific and which can therefore insert data directly into entries
Main processing operations, to generate metadata and entry information. This method is automatically called by C<prepare> and runs prior to uniqueness processing
More processing operations, to generate things which require uniqueness information, such as namehash. Runs after uniqueness processing
Final processing operations which depend on all previous processing
Track seen primary author base names for generation of uniqueprimaryauthor
Track seen work combination for generation of singletitle, uniquetitle, uniquebaretitle and uniquework
Track labelname/date parts combination for generation of extradate
Track labelname only for generation of extraname
Track labelname/labeltitle combination for generation of extratitle
Track labeltitle/labelyear combination for generation of extratitleyear
Postprocesses set entries. Checks for common set errors and enforces "dataonly" options for set members. It is not necessary to set skipbib and skipbiblist in the OPTIONS field for set members as these are automatically set by biblatex due to the \inset
Generate nocite information
Generate labelname information.
Generate labeldate information, including times
Generate labeltitle. Note that this is not conditionalised on the biblatex "labeltitle" option, as labeltitle should always be output since all standard styles need it. Only extratitle is conditionalised on the biblatex "labeltitle" option.
Generate fullhash
Generate namehash
Generate per_name_hashes
Generate the visible name information. This is used in various places and it is useful to have it generated in one place.
Generate the labelalpha and also the variant for sorting
Generate the extraalpha information
Put presort fields for an entry into the main Biber bltx state so that they are all available in one place, since presort can be set per-type as well as globally.
Process a bibliography list
Run an entry through a list filter. Returns a boolean.
Generate sort data schema for Sort::Key from a sort spec like this:

  spec => [
    [undef, { presort => {} }],
    [{ final => 1 }, { sortkey => {} }],
    [{ sort_direction => 'descending' },
      { sortname => {} },
      { author => {} },
      { editor => {} },
      { translator => {} },
      { sorttitle => {} },
      { title => {} },
    ],
    [undef, { sortyear => {} }, { year => {} }],
    [undef, { sorttitle => {} }, { title => {} }],
    [undef, { volume => {} }, { "0000" => {} }],
  ],
Generate information for sorting
Generate the uniqueness information needed when creating .bbl
Gather the uniquename information as we look through the names.

What is happening in here is the following: we are registering the number of occurrences of each name, name+init and fullname within a specific context. The context is "global" for uniquename < mininit and "name list" for uniquename=mininit or minfull. The keys we store to count this are the most specific information for the context, so, for uniquename < mininit, this is the full name and for uniquename=mininit or minfull, this is the complete list of full names. These keys have values in a hash which are ignored; they serve only to accumulate repeated occurrences within the context, and since we don't care about the repetition itself, the values are a useful sinkhole for it.

For example, if we find in the global context a base name "Smith" in two different entries under the same form "Alan Smith", the data structure will look like:

  {Smith}->{global}->{Alan Smith} = 2

We don't care about the value: it means only that there are two "Alan Smith"s in the global context, which need disambiguating identically anyway. So we just count the keys for the base name "Smith" in the global context to see how ambiguous the base name itself is. This would be "1", and so "Alan Smith" would get uniquename=false because it is unambiguous as just "Smith".

The same goes for "minimal" list context disambiguation for uniquename=mininit or minfull. For example, if we had the base name "Smith" to disambiguate in two entries with labelname "John Smith and Alan Jones", the data structure would look like:

  {Smith}->{Smith+Jones}->{John Smith+Alan Jones} = 2

Again, counting the keys of the context for the base name gives us "1", which means we have uniquename=false for "John Smith" in both entries because it's the same list. This also works for repeated names in the same list "John Smith and Bert Smith". Disambiguating "Smith" in this:

  {Smith}->{Smith+Smith}->{John Smith+Bert Smith} = 2

So both "John Smith" and "Bert Smith" in this entry get uniquename=false (of course, as long as there are no other "X Smith and Y Smith" entries where X != "John" or Y != "Bert").

The values from biblatex.sty:

  false   = 0
  init    = 1
  true    = 2
  full    = 2
  allinit = 3
  allfull = 4
  mininit = 5
  minfull = 6
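The counting scheme above can be sketched in a few lines of standalone Perl (the variable names here are illustrative; the real bookkeeping lives inside Biber's internals):

```perl
use strict;
use warnings;

my %counts;

# Two entries both contain the full form "Alan Smith" for base name
# "Smith" in the global context:
$counts{'Smith'}{'global'}{'Alan Smith'}++ for 1 .. 2;

# To decide ambiguity we count the distinct full forms (the hash keys),
# not how often each form was seen (the values):
my $distinct = keys %{ $counts{'Smith'}{'global'} };
print "$distinct\n";   # 1: "Smith" alone is unambiguous, so uniquename=false
```

Only a second distinct full form (say, "Arthur Smith") would raise the key count above 1 and trigger disambiguation for the base name.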
Generate the per-name uniquename values using the information harvested by create_uniquename_info()
Gather the uniquelist information as we look through the names
Generate the per-namelist uniquelist values using the information harvested by create_uniquelist_info()
Generate information for data which may change per datalist
Generate the singletitle field, if requested. The information for generating this is gathered in process_workuniqueness()
Generate the uniquetitle field, if requested. The information for generating this is gathered in process_workuniqueness()
Generate the uniquebaretitle field, if requested. The information for generating this is gathered in process_workuniqueness()
Generate the uniquework field, if requested. The information for generating this is gathered in process_workuniqueness()
Generate the uniqueprimaryauthor field, if requested. The information for generating this is gathered in create_uniquename_info()
Sort a list using information in entries according to a certain sorting template. A flag can be used to skip info messages on the first pass
Preprocessing for options. Used primarily to perform process-intensive operations which can be done once instead of inside dense loops later.
Do the main work. Process and sort all entries before writing the output.
Do the main work for tool mode
Fetch citekey and dependents data from section datasources.

Expects to find datasource packages named:

  Biber::Input::<type>::<datatype>

with one defined subroutine called:

  Biber::Input::<type>::<datatype>::extract_entries

which takes as arguments:

  1. Biber object
  2. Datasource name
  3. Reference to an array of cite keys to look for

and returns an array of the cite keys it did not find in the datasource
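A hypothetical skeleton of such a driver, following the contract described above (the package name, key names and filename are invented for illustration; a real driver would also build Biber::Entry objects for each key it finds):

```perl
use strict;
use warnings;

package Biber::Input::file::mytype;

# Pretend these are the only keys present in the datasource.
my %available = map { $_ => 1 } qw(key1 key2);

sub extract_entries {
    my ($biber, $source, $keys) = @_;
    # A real driver would parse $source and create entries here;
    # this stub only reports the keys it could not find.
    return grep { !$available{$_} } @$keys;
}

package main;

my @missing = Biber::Input::file::mytype::extract_entries(
    undef, 'refs.bib', ['key1', 'key3']);
print "@missing\n";   # key3
```

Biber then feeds the returned not-found keys to the next datasource in the section, so a driver must be careful to return exactly the keys it could not resolve.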
Get dependents of the entries for a given list of citekeys. Is called recursively until there are no more dependents to look for.
Remove undefined dependent keys from an entry using a map of dependent keys to entries
Convenience sub to parse a .bcf sorting section and return a sorting object
Dump the biber object with Data::Dump for debugging
This module is free software. You can redistribute it and/or modify it under the terms of the Artistic License 2.0.
This program is distributed in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose.