code: string, lengths 75 to 104k
docstring: string, lengths 1 to 46.9k
def printClassTree(self, element=None, showids=False, labels=False, showtype=False): """ Pretty-print the class tree of an ontology to stdout. Note: indentation leaves room for ids of up to 3 digits, plus a space. [123]1-- [1]123-- [12]12-- """ TYPE_...
Pretty-print the class tree of an ontology to stdout. Note: indentation leaves room for ids of up to 3 digits, plus a space. [123]1-- [1]123-- [12]12--
def _priority_key(pep8_result): """Key for sorting PEP8 results. Global fixes should be done first. This is important for things like indentation. """ priority = [ # Fix multiline colon-based before semicolon based. 'e701', # Break multiline statements early. 'e702'...
Key for sorting PEP8 results. Global fixes should be done first. This is important for things like indentation.
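A minimal sketch of the sorting idea: results whose codes appear in a priority list sort first, in list order, and everything else keeps its original relative order thanks to Python's stable sort. The code list here is illustrative, not autopep8's actual ordering.

```python
# Illustrative priority list: global/indentation fixes before local ones.
PRIORITY = ['e101', 'e111', 'e701', 'e702']

def priority_key(result):
    # result is assumed to be a dict with an 'id' field holding the code.
    code = result['id'].lower()
    if code in PRIORITY:
        return PRIORITY.index(code)
    return len(PRIORITY)  # unlisted codes run last, original order kept

results = [{'id': 'E226'}, {'id': 'E701'}, {'id': 'E111'}]
ordered = sorted(results, key=priority_key)
```

Because `sorted` is stable, ties among unlisted codes never reorder relative to each other.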
def create(self, unique_name=values.unset, friendly_name=values.unset, identity=values.unset, deployment_sid=values.unset, enabled=values.unset): """ Create a new DeviceInstance :param unicode unique_name: A unique, addressable name of this Device. :param u...
Create a new DeviceInstance :param unicode unique_name: A unique, addressable name of this Device. :param unicode friendly_name: A human readable description for this Device. :param unicode identity: An identifier of the Device user. :param unicode deployment_sid: The unique SID of the ...
def load_key_bindings_for_prompt(**kw): """ Create a ``Registry`` object with the default key bindings for an input prompt. This activates the key bindings for abort/exit (Ctrl-C/Ctrl-D), incremental search and auto suggestions. (Not for full screen applications.) """ kw.setdefault('e...
Create a ``Registry`` object with the default key bindings for an input prompt. This activates the key bindings for abort/exit (Ctrl-C/Ctrl-D), incremental search and auto suggestions. (Not for full screen applications.)
def run_nested_groups(): """Run the nested groups example. This example shows a PhaseGroup in a PhaseGroup. No phase is terminal, so all are run in order: main_phase, inner_main_phase, inner_teardown_phase, teardown_phase """ test = htf.Test( htf.PhaseGroup( main=[ ...
Run the nested groups example. This example shows a PhaseGroup in a PhaseGroup. No phase is terminal, so all are run in order: main_phase, inner_main_phase, inner_teardown_phase, teardown_phase
def delete_webhook(self, scaling_group, policy, webhook): """ Deletes the specified webhook from the specified policy. """ uri = "/%s/%s/policies/%s/webhooks/%s" % (self.uri_base, utils.get_id(scaling_group), utils.get_id(policy), utils.get_id(webhook)) ...
Deletes the specified webhook from the specified policy.
def validate_overwrite_different_input_output(opts): """ Make sure that if overwrite is set to False, the input and output folders are not set to the same location. :param opts: a namespace containing the attributes 'overwrite', 'input', and 'output' :raises ValidationException: if 'input' ...
Make sure that if overwrite is set to False, the input and output folders are not set to the same location. :param opts: a namespace containing the attributes 'overwrite', 'input', and 'output' :raises ValidationException: if 'input' and 'output' point to the same directory and 'overwrite' ...
def api_call(method, end_point, params=None, client_id=None, access_token=None): """Call given API end_point with API keys. :param method: HTTP method (e.g. 'get', 'delete'). :param end_point: API endpoint (e.g. 'users/john/sets'). :param params: Dictionary to be sent in the query string (e.g. {'myparam...
Call given API end_point with API keys. :param method: HTTP method (e.g. 'get', 'delete'). :param end_point: API endpoint (e.g. 'users/john/sets'). :param params: Dictionary to be sent in the query string (e.g. {'myparam': 'myval'}) :param client_id: Quizlet client ID as string. :param access_token:...
def pformat(self, prefix=()): ''' Makes a pretty ASCII format of the data, suitable for displaying in a console or saving to a text file. Returns a list of lines. ''' nan = float("nan") def sformat(segment, stat): FMT = "n={0}, mean={1}, p50/...
Makes a pretty ASCII format of the data, suitable for displaying in a console or saving to a text file. Returns a list of lines.
def connect_to_region(region_name): """ Establish connection to AWS API. """ logging.debug("Connecting to AWS region '%s'" % region_name) con = boto.vpc.connect_to_region(region_name) if not con: raise VpcRouteSetError("Could not establish connection to " ...
Establish connection to AWS API.
async def post_heartbeat(self, msg, _context): """Update the status of a service.""" name = msg.get('name') await self.service_manager.send_heartbeat(name)
Update the status of a service.
def get_published_courses_in_account(self, account_id, params={}): """ Return a list of published courses for the passed account ID. """ params["published"] = True return self.get_courses_in_account(account_id, params)
Return a list of published courses for the passed account ID.
def log(self, msg, level=INFO): """Record a log line with the logger :param str msg: content of the message :param level: logging level :return: None """ logger.log(level, '<{}> - '.format(self._name) + msg)
Record a log line with the logger :param str msg: content of the message :param level: logging level :return: None
def noisy_layer(self, prefix, action_in, out_size, sigma0, non_linear=True): """ a common dense layer: y = w^{T}x + b a noisy layer: y = (w + \epsilon_w*\sigma_w)^{T}x + (b+\epsilon_b*\sigma_b) where \epsilon are random variables sampled from factorized no...
a common dense layer: y = w^{T}x + b a noisy layer: y = (w + \epsilon_w*\sigma_w)^{T}x + (b+\epsilon_b*\sigma_b) where \epsilon are random variables sampled from factorized normal distributions and \sigma are trainable variables which are expected to vanish over the course of training...
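The noisy-layer equations above can be sketched in NumPy (names and shapes are illustrative; the original builds TensorFlow trainable variables). Factorized noise draws one noise vector per input unit and one per output unit, then combines them via an outer product.

```python
import numpy as np

def f(x):
    # Scaling used for factorized Gaussian noise: f(x) = sign(x) * sqrt(|x|)
    return np.sign(x) * np.sqrt(np.abs(x))

def noisy_dense(x, w, b, sigma_w, sigma_b, rng):
    # epsilon_w is the outer product of per-input and per-output noise,
    # so only in_size + out_size samples are drawn, not in_size * out_size.
    in_size, out_size = w.shape
    eps_in = f(rng.standard_normal(in_size))
    eps_out = f(rng.standard_normal(out_size))
    eps_w = np.outer(eps_in, eps_out)
    eps_b = eps_out
    return x @ (w + sigma_w * eps_w) + (b + sigma_b * eps_b)

rng = np.random.default_rng(0)
x = np.ones((2, 3))
w, b = np.zeros((3, 4)), np.zeros(4)
y = noisy_dense(x, w, b, np.full((3, 4), 0.1), np.full(4, 0.1), rng)
```

When the sigma parameters reach zero, the layer reduces exactly to the common dense layer.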
def printhtml(csvdiffs): """print the html""" soup = BeautifulSoup() html = Tag(soup, name="html") para1 = Tag(soup, name="p") para1.append(csvdiffs[0][0]) para2 = Tag(soup, name="p") para2.append(csvdiffs[1][0]) table = Tag(soup, name="table") table.attrs.update(dict(border="1")) ...
print the html
def entropy(string): """Calculate the entropy of a string.""" entropy = 0 for number in range(256): result = float(string.encode('utf-8').count( chr(number))) / len(string.encode('utf-8')) if result != 0: entropy = entropy - result * math.log(result, 2) return entropy
Calculate the entropy of a string.
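The snippet above is Python 2 code (`bytes.count(str)` raises a TypeError on Python 3) and scans all 256 byte values. A Python 3 sketch of the same byte-level Shannon entropy, iterating only over bytes that actually occur:

```python
import math

def shannon_entropy(string):
    # Byte-level Shannon entropy in bits per byte of the UTF-8 encoding.
    data = string.encode('utf-8')
    total = len(data)
    entropy = 0.0
    for count in (data.count(byte) for byte in set(data)):
        p = count / total
        entropy -= p * math.log(p, 2)
    return entropy
```

A string of one repeated byte has entropy 0; two equally frequent bytes give exactly 1 bit per byte.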
def quic_graph_lasso_cv(X, metric): """Run QuicGraphicalLassoCV on data with metric of choice. Compare results with GridSearchCV + quic_graph_lasso. The number of lambdas tested should be much lower with similar final lam_ selected. """ print("QuicGraphicalLassoCV with:") print(" metric: {}"...
Run QuicGraphicalLassoCV on data with metric of choice. Compare results with GridSearchCV + quic_graph_lasso. The number of lambdas tested should be much lower with similar final lam_ selected.
def param_errors(self, pnames=None): """ Return an array with the parameter errors Parameters ---------- pnames : list of str or None If a list of strings, get the Parameter objects with those names If None, get all the Parameter objects Returns ...
Return an array with the parameter errors Parameters ---------- pnames : list of str or None If a list of strings, get the Parameter objects with those names If None, get all the Parameter objects Returns ------- ~numpy.array of parameter errors...
def _get_error_context(input_, token): """ Build a context string that shows where on the line the error occurs. It consists of '^' characters starting at the lexer position and spanning the token length """ try: line = input_[token.lexpos...
Build a context string that shows where on the line the error occurs. It consists of '^' characters starting at the lexer position and spanning the token length
def get_worksheet_keys(data_dict, result_info_key): """Gets sorted keys from the dict, ignoring result_info_key and 'meta' key Args: data_dict: dict to pull keys from Returns: list of keys in the dict other than the result_info_key """ keys = set(data_dict.keys()) keys.remove(re...
Gets sorted keys from the dict, ignoring result_info_key and 'meta' key Args: data_dict: dict to pull keys from Returns: list of keys in the dict other than the result_info_key
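The truncated body above can be completed along these lines (a sketch matching the docstring; the real implementation may differ in details):

```python
def get_worksheet_keys(data_dict, result_info_key):
    # Drop the bookkeeping keys, then sort the rest for stable column order.
    keys = set(data_dict) - {result_info_key, 'meta'}
    return sorted(keys)
```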
def start(): ''' Start the server loop ''' from . import app root, apiopts, conf = app.get_app(__opts__) if not apiopts.get('disable_ssl', False): if 'ssl_crt' not in apiopts or 'ssl_key' not in apiopts: logger.error("Not starting '%s'. Options 'ssl_crt' and " ...
Start the server loop
def compute_group_count(self, pattern): """Compute the number of regexp match groups when the pattern is provided to the :func:`Cardinality.make_pattern()` method. :param pattern: Item regexp pattern (as string). :return: Number of regexp match groups in the cardinality pattern. ...
Compute the number of regexp match groups when the pattern is provided to the :func:`Cardinality.make_pattern()` method. :param pattern: Item regexp pattern (as string). :return: Number of regexp match groups in the cardinality pattern.
def create_network(self, name, driver=None, options=None, ipam=None, check_duplicate=None, internal=False, labels=None, enable_ipv6=False, attachable=None, scope=None, ingress=None): """ Create a network. Similar to the ``docker networ...
Create a network, similar to ``docker network create``. Args: name (str): Name of the network driver (str): Name of the driver used to create the network options (dict): Driver options as a key-value dictionary ipam (IPAMConfig): Optional custom IP scheme for...
def starting_expression(source_code, offset): """Return the expression to complete""" word_finder = worder.Worder(source_code, True) expression, starting, starting_offset = \ word_finder.get_splitted_primary_before(offset) if expression: return expression + '.' + starting return star...
Return the expression to complete
def get_avg_price_stat(self) -> Decimal: """ Calculates the statistical average price for the security, by averaging only the prices paid. Very simple first implementation. """ avg_price = Decimal(0) price_total = Decimal(0) price_count = 0 for account i...
Calculates the statistical average price for the security, by averaging only the prices paid. Very simple first implementation.
def get_current_instruction(self) -> Dict: """Gets the current instruction for this GlobalState. :return: the instruction at the current program counter """ instructions = self.environment.code.instruction_list return instructions[self.mstate.pc]
Gets the current instruction for this GlobalState. :return: the instruction at the current program counter
def insert_contribution_entries(database, entries): """Insert a set of records of a contribution report in the provided database. Insert a set of new records into the provided database without checking for conflicting entries. @param database: The MongoDB database to operate on. The contributions ...
Insert a set of records of a contribution report in the provided database. Insert a set of new records into the provided database without checking for conflicting entries. @param database: The MongoDB database to operate on. The contributions collection will be used from this database. @type d...
def read_rle(file_obj, header, bit_width, debug_logging): """Read a run-length encoded run from the given file object with the given header and bit_width. The count is determined from the header and the width is used to grab the value that's repeated. Yields the value repeated count times. """ count = heade...
Read a run-length encoded run from the given file object with the given header and bit_width. The count is determined from the header and the width is used to grab the value that's repeated. Yields the value repeated count times.
def read(self): """Reads the cache file as a pickle file.""" def warn(msg, elapsed_time, current_time): desc = self._cache_id_desc() self._warnings( "{0} {1}: {2}s < {3}s", msg, desc, elapsed_time, current_time) file_time = get_time() out = self._o...
Reads the cache file as a pickle file.
def discard_observer(self, observer): """Un-register an observer. Args: observer: The observer to un-register. Returns True if an observer was removed, otherwise False. """ discarded = False key = self.make_key(observer) if key in self.observers: ...
Un-register an observer. Args: observer: The observer to un-register. Returns True if an observer was removed, otherwise False.
def visit_Call(self, nodeCall): """ Invoked when visiting a function-call node. @param nodeCall: the node currently being visited """ super(PatternFinder, self).generic_visit(nodeCall) # Capture assignment like 'f = getattr(...)'. if hasattr(nodeCall.func, "func"): ...
Invoked when visiting a function-call node. @param nodeCall: the node currently being visited
def clear(self): ''' Remove all content from the document but do not reset title. Returns: None ''' self._push_all_models_freeze() try: while len(self._roots) > 0: r = next(iter(self._roots)) self.remove_root(r) fi...
Remove all content from the document but do not reset title. Returns: None
def get_creators(self, *args, **kwargs): """ Returns a full CreatorDataWrapper object for this story. /stories/{storyId}/creators :returns: CreatorDataWrapper -- A new request to API. Contains full results set. """ from .creator import Creator, CreatorDataWrapper ...
Returns a full CreatorDataWrapper object for this story. /stories/{storyId}/creators :returns: CreatorDataWrapper -- A new request to API. Contains full results set.
def get_matlab_value(val): """ Extract a value from a Matlab file. From the oct2py project, see https://pythonhosted.org/oct2py/conversions.html """ import numpy as np # Extract each item of a list. if isinstance(val, list): return [get_matlab_value(v) for v in val] # Ignor...
Extract a value from a Matlab file. From the oct2py project, see https://pythonhosted.org/oct2py/conversions.html
def weld_variance(array, weld_type): """Returns the variance of the array. Parameters ---------- array : numpy.ndarray or WeldObject Input array. weld_type : WeldType Type of each element in the input array. Returns ------- WeldObject Representation of this comp...
Returns the variance of the array. Parameters ---------- array : numpy.ndarray or WeldObject Input array. weld_type : WeldType Type of each element in the input array. Returns ------- WeldObject Representation of this computation.
def new_output_file_opt(self, opt, name): """ Add an option and return a new file handle """ fil = File(name) self.add_output_opt(opt, fil) return fil
Add an option and return a new file handle
def walk_directory_directories_relative_path(self, relativePath=""): """ Walk a directory in the repository and yield the relative path of every directory found. :parameters: #. relativePath (str): The relative path of the directory. """ # get directory info dict ...
Walk a directory in the repository and yield the relative path of every directory found. :parameters: #. relativePath (str): The relative path of the directory.
def bow(self, tokens, remove_oov=False): """ Create a bow representation of a list of tokens. Parameters ---------- tokens : list. The list of items to change into a bag of words representation. remove_oov : bool. Whether to remove OOV items from ...
Create a bow representation of a list of tokens. Parameters ---------- tokens : list. The list of items to change into a bag of words representation. remove_oov : bool. Whether to remove OOV items from the input. If this is True, the length of the ret...
def _set_properties(self): """Set up the title and label""" self.SetTitle(_("About pyspread")) label = _("pyspread {version}\nCopyright Martin Manns") label = label.format(version=VERSION) self.about_label.SetLabel(label)
Set up the title and label
def draw_no_data(self): """Write the 'no data' text to the SVG""" no_data = self.node( self.graph.nodes['text_overlay'], 'text', x=self.graph.view.width / 2, y=self.graph.view.height / 2, class_='no_data' ) no_data.text = self.gra...
Write the 'no data' text to the SVG
def Validate(self, problems=default_problem_reporter): """Validate attribute values and this object's internal consistency. Returns: True iff all validation checks passed. """ found_problem = False found_problem = ((not util.ValidateRequiredFieldsAreNotEmpty( self, s...
Validate attribute values and this object's internal consistency. Returns: True iff all validation checks passed.
def get_child_by_name(self, childname): """Get a child node of the current instance by its name. :param childname: the name of the required child node. :type childname: str :returns: the first child node found with name `childname`. :rtype: Node or None """ _chil...
Get a child node of the current instance by its name. :param childname: the name of the required child node. :type childname: str :returns: the first child node found with name `childname`. :rtype: Node or None
def get_output_margin(self, status=None): """Get the output margin (number of rows for the prompt, footer and timing message).""" margin = self.get_reserved_space() + self.get_prompt(self.prompt).count('\n') + 1 if special.is_timing_enabled(): margin += 1 if status: ...
Get the output margin (number of rows for the prompt, footer and timing message).
def _growth_curve_pooling_group(self, distr='glo', as_rural=False): """ Return flood growth curve function based on `amax_records` from a pooling group. :return: Inverse cumulative distribution function with one parameter `aep` (annual exceedance probability) :type: :class:`.GrowthCurve...
Return flood growth curve function based on `amax_records` from a pooling group. :return: Inverse cumulative distribution function with one parameter `aep` (annual exceedance probability) :type: :class:`.GrowthCurve` :param as_rural: assume catchment is fully rural. Default: false. :typ...
def _get_connection(self, handle, expect_state=None): """Get a connection object, logging an error if it is in an unexpected state """ conndata = self._connections.get(handle) if conndata and expect_state is not None and conndata['state'] != expect_state: self._logger.error("...
Get a connection object, logging an error if it is in an unexpected state
def _get_rescale_factors(self, reference_shape, meta_info): """ Compute the resampling factor for height and width of the input array :param reference_shape: Tuple specifying height and width in pixels of high-resolution array :type reference_shape: tuple of ints :param meta_info: Meta-...
Compute the resampling factor for height and width of the input array :param reference_shape: Tuple specifying height and width in pixels of high-resolution array :type reference_shape: tuple of ints :param meta_info: Meta-info dictionary of input eopatch. Defines OGC request and parameters use...
def logprob(self, actions, action_logits): """ Logarithm of probability of given sample """ neg_log_prob = F.nll_loss(action_logits, actions, reduction='none') return -neg_log_prob
Logarithm of probability of given sample
def create_token(key, payload): """Auth token generator. The payload should be a JSON-encodable data structure """ token = hmac.new(key) token.update(json.dumps(payload)) return token.hexdigest()
Auth token generator. The payload should be a JSON-encodable data structure
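The snippet above relies on Python 2 behavior: on Python 3, `hmac.new` requires bytes and (since 3.8) an explicit digest, and `update` rejects `str`. A hedged Python 3 equivalent, with SHA-256 chosen here as an illustrative digest:

```python
import hashlib
import hmac
import json

def create_token(key, payload):
    # sort_keys makes the serialization, and therefore the token, stable
    # across runs; key and body must be bytes for hmac.new on Python 3.
    body = json.dumps(payload, sort_keys=True).encode('utf-8')
    return hmac.new(key.encode('utf-8'), body, hashlib.sha256).hexdigest()
```

The hex digest of SHA-256 is always 64 characters, so token length is fixed regardless of payload size.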
def setting(key, default=None, expected_type=None, qsettings=None): """Helper function to get a value from settings under InaSAFE scope. :param key: Unique key for setting. :type key: basestring :param default: The default value in case of the key is not found or there is an error. :type d...
Helper function to get a value from settings under InaSAFE scope. :param key: Unique key for setting. :type key: basestring :param default: The default value in case of the key is not found or there is an error. :type default: basestring, None, boolean, int, float :param expected_type: Th...
def complete(self): """Task is complete if completion marker is set and all requirements are complete """ is_complete = super(ORMWrapperTask, self).complete() for req in self.requires(): is_complete &= req.complete() return is_complete
Task is complete if completion marker is set and all requirements are complete
def read_pid_constants(self): """Reads back the PID constants stored on the Grizzly.""" p = self._read_as_int(Addr.PConstant, 4) i = self._read_as_int(Addr.IConstant, 4) d = self._read_as_int(Addr.DConstant, 4) return map(lambda x: x / (2 ** 16), (p, i, d))
Reads back the PID constants stored on the Grizzly.
def get_game_logs(self): """Returns team game logs as a pandas DataFrame""" logs = self.response.json()['resultSets'][0]['rowSet'] headers = self.response.json()['resultSets'][0]['headers'] df = pd.DataFrame(logs, columns=headers) df.GAME_DATE = pd.to_datetime(df.GAME_DATE) ...
Returns team game logs as a pandas DataFrame
def read_azimuth_noise_array(elts): """Read the azimuth noise vectors. The azimuth noise is normalized per swath to account for gain differences between the swaths in EW mode. This is based on this reference: J. Park, A. A. Korosov, M. Babiker, S. Sandven and J. Won, ...
Read the azimuth noise vectors. The azimuth noise is normalized per swath to account for gain differences between the swaths in EW mode. This is based on this reference: J. Park, A. A. Korosov, M. Babiker, S. Sandven and J. Won, "Efficient Thermal Noise Removal for Sentinel...
def cookie_dump(key, value='', max_age=None, expires=None, path='/', domain=None, secure=False, httponly=False): """ :rtype: ``Cookie.SimpleCookie`` """ cookie = SimpleCookie() cookie[key] = value for attr in ('max_age', 'expires', 'path', 'domain', 'secure', 'ht...
:rtype: ``Cookie.SimpleCookie``
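A sketch of the Python 3 shape of this helper: `http.cookies` replaces the old `Cookie` module, and `SimpleCookie` spells attribute names with dashes (e.g. 'max-age'), which is likely why the original maps underscore parameter names onto cookie attributes.

```python
from http.cookies import SimpleCookie

def cookie_dump(key, value='', path='/', max_age=None, secure=False,
                httponly=False):
    # Only truthy attributes are written, mirroring typical cookie helpers.
    cookie = SimpleCookie()
    cookie[key] = value
    for attr, val in (('path', path), ('max-age', max_age),
                      ('secure', secure), ('httponly', httponly)):
        if val:
            cookie[key][attr] = val
    return cookie
```

`cookie.output()` then renders a ready-to-send `Set-Cookie` header line.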
def tas2eas(Vtas, H): """True Airspeed to Equivalent Airspeed""" rho = density(H) Veas = Vtas * np.sqrt(rho/rho0) return Veas
True Airspeed to Equivalent Airspeed
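The original derives `rho` from altitude via a `density()` helper that is not shown; a self-contained sketch that takes the density directly, with the inverse conversion added for round-tripping (rho0 = 1.225 kg/m^3 is the assumed ISA sea-level value):

```python
import math

RHO0 = 1.225  # ISA sea-level air density, kg/m^3 (assumed value of rho0)

def tas2eas(v_tas, rho):
    # EAS = TAS * sqrt(rho / rho0); at sea-level density, EAS equals TAS.
    return v_tas * math.sqrt(rho / RHO0)

def eas2tas(v_eas, rho):
    # Inverse conversion: TAS = EAS / sqrt(rho / rho0).
    return v_eas / math.sqrt(rho / RHO0)
```

At altitude (rho < rho0) the equivalent airspeed is lower than the true airspeed, matching the square-root density ratio in the formula.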
def lstm_seq2seq_internal_bid_encoder(inputs, targets, hparams, train): """The basic LSTM seq2seq model with bidirectional encoder.""" with tf.variable_scope("lstm_seq2seq_bid_encoder"): if inputs is not None: inputs_length = common_layers.length_from_embedding(inputs) # Flatten inputs. inputs...
The basic LSTM seq2seq model with bidirectional encoder.
def fragment6(pkt, fragSize): """ Performs fragmentation of an IPv6 packet. Provided packet ('pkt') must already contain an IPv6ExtHdrFragment() class. 'fragSize' argument is the expected maximum size of fragments (MTU). The list of packets is returned. If packet does not contain an IPv6ExtHdrFragm...
Performs fragmentation of an IPv6 packet. Provided packet ('pkt') must already contain an IPv6ExtHdrFragment() class. 'fragSize' argument is the expected maximum size of fragments (MTU). The list of packets is returned. If packet does not contain an IPv6ExtHdrFragment class, it is returned in result li...
def prepare_files(self): """Get files from data dump.""" # Prepare files files = {} for f in self.data['files']: k = f['full_name'] if k not in files: files[k] = [] files[k].append(f) # Sort versions for k in files.keys...
Get files from data dump.
def get_task_param_string(task): """Get all parameters of a task as one string Returns: str: task parameter string """ # get dict str -> str from luigi param_dict = task.to_str_params() # sort keys, serialize items = [] for key in sorted(param_dict.keys()): items.append...
Get all parameters of a task as one string Returns: str: task parameter string
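The truncated body sorts the keys and serializes the pairs; a sketch of that step (the `', '` separator and `key=value` shape are assumptions, since the original join is cut off):

```python
def param_string(param_dict):
    # Sort keys so the resulting string is deterministic for a given task,
    # then join key=value pairs.
    return ', '.join('{}={}'.format(k, param_dict[k])
                     for k in sorted(param_dict))
```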
def get_all_invoice_payments(self, params=None): """ Get all invoice payments This will iterate over all pages until it gets all elements. If the rate limit is exceeded, it will raise an exception and you will get nothing :param params: search params :return: list ...
Get all invoice payments This will iterate over all pages until it gets all elements. If the rate limit is exceeded, it will raise an exception and you will get nothing :param params: search params :return: list
def xlsx_to_csv(self, infile, worksheet=0, delimiter=","): """ Convert xlsx to easier format first, since we want to use the convenience of the CSV library """ wb = load_workbook(self.getInputFile()) sheet = wb.worksheets[worksheet] buffer = StringIO() # extract ...
Convert xlsx to easier format first, since we want to use the convenience of the CSV library
def _set_callables(modules): ''' Set all Ansible modules callables :return: ''' def _set_function(cmd_name, doc): ''' Create a Salt function for the Ansible module. ''' def _cmd(*args, **kw): ''' Call an Ansible module as a function from the Sa...
Set all Ansible modules callables :return:
def split_obj (obj, prefix = None): ''' Split the object, returning a 3-tuple with the flat object, optionally followed by the key for the subobjects and a list of those subobjects. ''' # copy the object, optionally add the prefix before each key new = obj.copy() if prefix is None else { '{}_{}...
Split the object, returning a 3-tuple with the flat object, optionally followed by the key for the subobjects and a list of those subobjects.
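The body is truncated, but the described 3-tuple can be sketched as follows. Treating dict/list values as "subobjects" is an assumption made for illustration; the original's criterion may differ.

```python
def split_obj(obj, prefix=None):
    # Keep scalar values in the flat object (optionally prefixed);
    # collect dict/list values as the subobjects.
    flat, subkey, subs = {}, None, []
    for key, value in obj.items():
        name = '{}_{}'.format(prefix, key) if prefix else key
        if isinstance(value, (dict, list)):
            subkey = key
            subs = value if isinstance(value, list) else [value]
        else:
            flat[name] = value
    return flat, subkey, subs
```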
def _sum_wrapper(fn): """ Wrapper to perform row-wise aggregation of list arguments and pass them to a function. The return value of the function is summed over the argument groups. Non-list arguments will be automatically cast to a list. """ def wrapper(*args, **kwargs): v = 0 ...
Wrapper to perform row-wise aggregation of list arguments and pass them to a function. The return value of the function is summed over the argument groups. Non-list arguments will be automatically cast to a list.
def rank(self, method='ordinal', ascending=True, mask=NotSpecified, groupby=NotSpecified): """ Construct a new Factor representing the sorted rank of each column within each row. Parameters ---------- method : str, {'or...
Construct a new Factor representing the sorted rank of each column within each row. Parameters ---------- method : str, {'ordinal', 'min', 'max', 'dense', 'average'} The method used to assign ranks to tied elements. See `scipy.stats.rankdata` for a full descripti...
def get_defining_component(pe_pe): ''' Get the BridgePoint component (C_C) that defines the packageable element *pe_pe*. ''' if pe_pe is None: return None if type(pe_pe).__name__ != 'PE_PE': pe_pe = one(pe_pe).PE_PE[8001]() ep_pkg = one(pe_pe).EP_PKG[8000]() if ep...
Get the BridgePoint component (C_C) that defines the packageable element *pe_pe*.
def translate_config(self, profile, merge=None, replace=None): """ Translate the object to native configuration. In this context, merge and replace means the following: * **Merge** - Elements that exist in both ``self`` and ``merge`` will use by default the values in ``merge`...
Translate the object to native configuration. In this context, merge and replace means the following: * **Merge** - Elements that exist in both ``self`` and ``merge`` will use by default the values in ``merge`` unless ``self`` specifies a new one. Elements that exist only in ``self...
def Logger(name, **kargs): """ Create and return logger """ path_dirs = PathDirs(**kargs) logging.captureWarnings(True) logger = logging.getLogger(name) logger.setLevel(logging.INFO) handler = logging.handlers.WatchedFileHandler(os.path.join( path_dirs.meta_dir, 'vent.log')) handler....
Create and return logger
def avg(self): """return the mean value""" # XXX rename this method if len(self.values) > 0: return sum(self.values) / float(len(self.values)) else: return None
return the mean value
def is_translocated(graph: BELGraph, node: BaseEntity) -> bool: """Return true if over any of the node's edges, it is translocated.""" return _node_has_modifier(graph, node, TRANSLOCATION)
Return true if over any of the node's edges, it is translocated.
def annual_reading_counts(kind='all'): """ Returns a list of dicts, one per year of reading. In year order. Each dict is like this (if kind is 'all'): {'year': datetime.date(2003, 1, 1), 'book': 12, # only included if kind is 'all' or 'book' 'periodical': 18, ...
Returns a list of dicts, one per year of reading. In year order. Each dict is like this (if kind is 'all'): {'year': datetime.date(2003, 1, 1), 'book': 12, # only included if kind is 'all' or 'book' 'periodical': 18, # only included if kind is 'all' or 'periodical' ...
def get_embedded_tweet(tweet): """ Get the retweeted Tweet OR the quoted Tweet and return it as a dictionary Args: tweet (Tweet): A Tweet object (not simply a dict) Returns: dict (or None, if the Tweet is neither a quote Tweet nor a Retweet): a dictionary representing the quote ...
Get the retweeted Tweet OR the quoted Tweet and return it as a dictionary Args: tweet (Tweet): A Tweet object (not simply a dict) Returns: dict (or None, if the Tweet is neither a quote Tweet nor a Retweet): a dictionary representing the quote Tweet or the Retweet
def get_initial_states(self, input_var, init_state=None): """ :type input_var: T.var :rtype: dict """ initial_states = {} for state in self.state_names: if state != "state" or not init_state: if self._input_type == 'sequence' and input_var.ndim...
:type input_var: T.var :rtype: dict
def serialize_array(array, domain=(0, 1), fmt='png', quality=70): """Given an arbitrary rank-3 NumPy array, returns the byte representation of the encoded image. Args: array: NumPy array of dtype uint8 and range 0 to 255 domain: expected range of values in array, see `_normalize_array()` fmt: string ...
Given an arbitrary rank-3 NumPy array, returns the byte representation of the encoded image. Args: array: NumPy array of dtype uint8 and range 0 to 255 domain: expected range of values in array, see `_normalize_array()` fmt: string describing desired file format, defaults to 'png' quality: specifie...
def generate_strings(project_base_dir, localization_bundle_path, tmp_directory, exclude_dirs, include_strings_file, special_ui_components_prefix): """ Calls the builtin 'genstrings' command with JTLocalizedString as the string to search for, and adds strings extracted from UI elements i...
Calls the builtin 'genstrings' command with JTLocalizedString as the string to search for, adds strings extracted from UI elements internationalized with 'JTL', and removes duplicates.
def get_satellites_list(self, sat_type): """Get a sorted satellite list: master then spare :param sat_type: type of the required satellites (arbiters, schedulers, ...) :type sat_type: str :return: sorted satellites list :rtype: list[alignak.objects.satellitelink.SatelliteLink] ...
Get a sorted satellite list: master then spare :param sat_type: type of the required satellites (arbiters, schedulers, ...) :type sat_type: str :return: sorted satellites list :rtype: list[alignak.objects.satellitelink.SatelliteLink]
def field2parameter(self, field, name="body", default_in="body"): """Return an OpenAPI parameter as a `dict`, given a marshmallow :class:`Field <marshmallow.Field>`. https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#parameterObject """ location = field.m...
Return an OpenAPI parameter as a `dict`, given a marshmallow :class:`Field <marshmallow.Field>`. https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#parameterObject
def _evaluate(self, indices, norm_distances, out=None): """Evaluate nearest interpolation.""" idx_res = [] for i, yi in zip(indices, norm_distances): if self.variant == 'left': idx_res.append(np.where(yi <= .5, i, i + 1)) else: idx_res.appe...
Evaluate nearest interpolation.
def get_argflag(argstr_, default=False, help_='', return_specified=None, need_prefix=True, return_was_specified=False, argv=None, debug=None, **kwargs): """ Checks if the command line has a flag or a corresponding noflag Args: argstr_ (str, list, or tu...
Checks if the command line has a flag or a corresponding noflag Args: argstr_ (str, list, or tuple): the flag to look for default (bool): don't use this (default = False) help_ (str): a help string (default = '') return_specified (bool): returns if flag was specified or not (default =...
def get_col_width(self, col, tab): """Returns column width""" try: return self.col_widths[(col, tab)] except KeyError: return config["default_col_width"]
Returns column width
def callRemote(self, objectPath, methodName, interface=None, destination=None, signature=None, body=None, expectReply=True, autoStart=True, timeout=None, returnSignatur...
Calls a method on a remote DBus object and returns a deferred to the result. @type objectPath: C{string} @param objectPath: Path of the remote object @type methodName: C{string} @param methodName: Name of the method to call @type interface: None or C{string} @p...
def load_delimited(filename, converters, delimiter=r'\s+'): r"""Utility function for loading in data from an annotation file where columns are delimited. The number of columns is inferred from the length of the provided converters list. Examples -------- >>> # Load in a one-column list of even...
r"""Utility function for loading in data from an annotation file where columns are delimited. The number of columns is inferred from the length of the provided converters list. Examples -------- >>> # Load in a one-column list of event times (floats) >>> load_delimited('events.txt', [float]) ...
def install_documentation(path="./Litho1pt0-Notebooks"): """Install the example notebooks for litho1pt0 in the given location WARNING: If the path exists, the Notebook files will be written into the path and will overwrite any existing files with which they collide. The default path ("./Litho1pt0-Noteb...
Install the example notebooks for litho1pt0 in the given location WARNING: If the path exists, the Notebook files will be written into the path and will overwrite any existing files with which they collide. The default path ("./Litho1pt0-Notebooks") is chosen to make collision less likely / problematic ...
def InsertFloatArg(self, string="", **_): """Inserts a Float argument.""" try: float_value = float(string) return self.InsertArg(float_value) except (TypeError, ValueError): raise ParseError("%s is not a valid float." % string)
Inserts a Float argument.
def _delete_json(self, instance, space=None, rel_path=None, extra_params=None, id_field=None, append_to_path=None): """ Base level method for removing data from the API """ model = type(instance) # Only API.spaces and API.event should not provide # the `space argument ...
Base level method for removing data from the API
def emit_reset(self): """Resets the device to a blank state.""" for name in self.layout.axes: params = self.layout.axes_options.get(name, DEFAULT_AXIS_OPTIONS) self.write_event(ecodes.EV_ABS, name, int(sum(params[1:3]) / 2)) for name in self.layout.buttons: s...
Resets the device to a blank state.
def setup(app) -> Dict[str, Any]: """ Sets up Sphinx extension. """ app.connect("doctree-read", on_doctree_read) app.connect("builder-inited", on_builder_inited) app.add_css_file("uqbar.css") app.add_node( nodes.classifier, override=True, html=(visit_classifier, depart_classifier) ...
Sets up Sphinx extension.
def payments_for_address(self, address): "return an array of (TX ids, net_payment)" URL = self.api_domain + ("/address/%s?format=json" % address) d = urlopen(URL).read() json_response = json.loads(d.decode("utf8")) response = [] for tx in json_response.get("txs", []): ...
return an array of (TX ids, net_payment)
def quadvgk(feval, fmin, fmax, tol1=1e-5, tol2=1e-5): """ numpy implementation makes use of the code here: http://se.mathworks.com/matlabcentral/fileexchange/18801-quadvgk We here use gaussian kronrod integration already used in gpstuff for evaluating one dimensional integrals. This is vectorised quadra...
NumPy implementation based on the code at http://se.mathworks.com/matlabcentral/fileexchange/18801-quadvgk Here we use Gauss-Kronrod integration, already used in GPstuff, for evaluating one-dimensional integrals. This is vectorised quadrature, which means that several functions can be evaluated at the sa...
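The "vectorised quadrature" idea, evaluating several integrands on one shared set of nodes, can be illustrated with fixed-order Gauss-Legendre nodes (a simplification: the real routine uses adaptive Gauss-Kronrod pairs to estimate error):

```python
import numpy as np

def quadv(feval, fmin, fmax, n=20):
    """Vectorised fixed-order Gauss-Legendre quadrature (not Gauss-Kronrod).

    `feval` maps an array of abscissae to an array of shape
    (n_funcs, n_points), so several integrands share one evaluation call.
    """
    nodes, weights = np.polynomial.legendre.leggauss(n)
    # Affine map of nodes from [-1, 1] onto [fmin, fmax].
    x = 0.5 * (fmax - fmin) * nodes + 0.5 * (fmax + fmin)
    return 0.5 * (fmax - fmin) * feval(x) @ weights

# Two integrands at once: x**2 and the constant 1, over [0, 1].
vals = quadv(lambda x: np.vstack([x**2, np.ones_like(x)]), 0.0, 1.0)
```

Stacking integrands row-wise and finishing with a single matrix-vector product is what makes the scheme cheap when many similar integrals are needed.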
def get_source_url(obj): """Get the source url for a Trust object. Args: obj (ChainOfTrust or LinkOfTrust): the trust object to inspect Raises: CoTError: if repo and source are defined and don't match Returns: str: the source url. """ source_env_prefix = obj.context.c...
Get the source url for a Trust object. Args: obj (ChainOfTrust or LinkOfTrust): the trust object to inspect Raises: CoTError: if repo and source are defined and don't match Returns: str: the source url.
def build(self, region=None, profile=None): """Get or create the provider for the given region and profile.""" with self.lock: # memoization lookup key derived from region + profile. key = "{}-{}".format(profile, region) try: # assume provider is in p...
Get or create the provider for the given region and profile.
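The lock-guarded memoization keyed on profile and region can be sketched like this (the factory callable here is a hypothetical stand-in for the real provider constructor):

```python
import threading

class ProviderBuilder:
    """Sketch of lock-guarded memoization keyed on (profile, region)."""

    def __init__(self, factory):
        self.factory = factory
        self.providers = {}
        self.lock = threading.Lock()

    def build(self, region=None, profile=None):
        # One cached provider per unique profile/region pair.
        key = "{}-{}".format(profile, region)
        with self.lock:
            try:
                return self.providers[key]
            except KeyError:
                provider = self.factory(region=region, profile=profile)
                self.providers[key] = provider
                return provider

builder = ProviderBuilder(lambda region, profile: object())
a = builder.build(region="us-east-1")
b = builder.build(region="us-east-1")
```

Holding the lock across both the lookup and the construction guarantees at most one provider is ever built per key, at the cost of serialising construction.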
def get_provider_name(driver): """ Return the provider name from the driver class :param driver: obj :return: str """ kls = driver.__class__.__name__ for d, prop in DRIVERS.items(): if prop[1] == kls: return d return None
Return the provider name from the driver class :param driver: obj :return: str
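The reverse lookup over the `DRIVERS` registry can be sketched with a hypothetical table (the real mapping and its value layout belong to the library; only the class name at index 1 is assumed here):

```python
# Hypothetical DRIVERS table: provider key -> (module path, driver class name).
DRIVERS = {
    "ec2": ("libcloud.compute", "EC2NodeDriver"),
    "gce": ("libcloud.compute", "GCENodeDriver"),
}

def get_provider_name(driver):
    """Reverse-lookup the provider key whose registered class name matches."""
    kls = driver.__class__.__name__
    for name, props in DRIVERS.items():
        if props[1] == kls:
            return name
    return None

class EC2NodeDriver:  # stand-in instance whose class name matches the table
    pass

name = get_provider_name(EC2NodeDriver())
```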
def arange_col(n, dtype=int): """ Returns ``np.arange`` in a column form. :param n: Length of the array. :type n: int :param dtype: Type of the array. :type dtype: type :returns: ``np.arange`` in a column form. :rtype: ndarray """ return np.reshape(np.arange(n, dt...
Returns ``np.arange`` in a column form. :param n: Length of the array. :type n: int :param dtype: Type of the array. :type dtype: type :returns: ``np.arange`` in a column form. :rtype: ndarray
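The column form amounts to a single reshape, shown here for clarity:

```python
import numpy as np

def arange_col(n, dtype=int):
    """Return np.arange(n) reshaped into an (n, 1) column vector."""
    return np.reshape(np.arange(n, dtype=dtype), (n, 1))

col = arange_col(3)
# col has shape (3, 1): a column, not a flat (3,) array
```

The distinction matters for broadcasting: an `(n, 1)` column against a `(m,)` row produces an `(n, m)` grid, which a flat `(n,)` array would not.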
def delete_priority_class(self, name, **kwargs): """ delete a PriorityClass This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.delete_priority_class(name, async_req=True) >>> result = ...
delete a PriorityClass This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.delete_priority_class(name, async_req=True) >>> result = thread.get() :param async_req bool :param str name: ...
def shade_jar(self, shading_rules, jar_path): """Shades a jar using the shading rules from the given jvm_binary. This *overwrites* the existing jar file at ``jar_path``. :param shading_rules: predefined rules for shading :param jar_path: The filepath to the jar that should be shaded. """ self....
Shades a jar using the shading rules from the given jvm_binary. This *overwrites* the existing jar file at ``jar_path``. :param shading_rules: predefined rules for shading :param jar_path: The filepath to the jar that should be shaded.
def get_outputs(sym, params, in_shape, in_label): """ Infer output shapes and return dictionary of output name to shape :param :class:`~mxnet.symbol.Symbol` sym: symbol to perform infer shape on :param dic of (str, nd.NDArray) params: :param list of tuple(int, ...) in_shape: list of all...
Infer output shapes and return dictionary of output name to shape :param :class:`~mxnet.symbol.Symbol` sym: symbol to perform infer shape on :param dict of (str, nd.NDArray) params: :param list of tuple(int, ...) in_shape: list of all input shapes :param in_label: name of label typicall...
def transfer(self, data): """Transfers data over SPI. Arguments: data: The data to transfer. Returns: The data returned by the SPI device. """ settings = self.transfer_settings settings.spi_tx_size = len(data) self.transfer_settings = set...
Transfers data over SPI. Arguments: data: The data to transfer. Returns: The data returned by the SPI device.
def _detectEncoding(self, xml_data, isHTML=False): """Given a document, tries to detect its XML encoding.""" xml_encoding = sniffed_xml_encoding = None try: if xml_data[:4] == '\x4c\x6f\xa7\x94': # EBCDIC xml_data = self._ebcdic_to_ascii(xml_data) ...
Given a document, tries to detect its XML encoding.
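Encoding detection of this kind starts by sniffing the leading bytes for a BOM or a recognisable `<?` pattern. A reduced Python 3 sketch (the full detector also handles EBCDIC, as the `\x4c\x6f\xa7\x94` check shows, and falls back to parsing the `<?xml ... encoding=...?>` declaration):

```python
def sniff_xml_encoding(xml_data):
    """Guess an XML document's encoding from its first bytes."""
    if xml_data[:3] == b'\xef\xbb\xbf':
        return 'utf-8'                        # UTF-8 BOM
    if xml_data[:2] == b'\xfe\xff':
        return 'utf-16be'                     # UTF-16 big-endian BOM
    if xml_data[:2] == b'\xff\xfe':
        return 'utf-16le'                     # UTF-16 little-endian BOM
    if xml_data[:4] == b'\x00\x3c\x00\x3f':   # '<?' in big-endian UTF-16
        return 'utf-16be'
    if xml_data[:4] == b'\x3c\x00\x3f\x00':   # '<?' in little-endian UTF-16
        return 'utf-16le'
    return None

enc = sniff_xml_encoding(b'\xfe\xff\x00<\x00?')
```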
def check_publication_state(publication_id): """Check the publication's current state.""" with db_connect() as db_conn: with db_conn.cursor() as cursor: cursor.execute("""\ SELECT "state", "state_messages" FROM publications WHERE id = %s""", (publication_id,)) publication_state, ...
Check the publication's current state.
def by_period(self, field=None, period=None, timezone=None, start=None, end=None): """ Create a date histogram aggregation using the last added aggregation for the current object. Add this date_histogram aggregation into self.aggregations :param field: the index field to create the hist...
Create a date histogram aggregation using the last added aggregation for the current object. Add this date_histogram aggregation into self.aggregations :param field: the index field to create the histogram from :param period: the interval which elasticsearch supports, ex: "month", "week" and su...
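The aggregation body being described can be sketched as plain dict construction. Note `calendar_interval` is the modern Elasticsearch (7+) field name; older clusters used plain `interval`, and the real method attaches this to a larger query object:

```python
def by_period(field, period="month", timezone="UTC", start=None, end=None):
    """Build an Elasticsearch date_histogram aggregation body."""
    agg = {
        "date_histogram": {
            "field": field,
            "calendar_interval": period,
            "time_zone": timezone,
        }
    }
    if start is not None or end is not None:
        # Force buckets to cover the requested window even if empty.
        agg["date_histogram"]["extended_bounds"] = {"min": start, "max": end}
    return agg

agg = by_period("created_at", period="week")
```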