| id | text | title |
|---|---|---|
doc_23537600 | My goal is to acquire the most frequent words from the document.
The problem is that Counter() does not work properly with my file.
Here is the code:
#1. Building a Counter with bag-of-words
import pandas as pd
df = pd.read_excel('combined_file.xlsx', index_col=None)
import nltk
from nltk.tokenize import word_tokenize
# Tokenize the article: tokens
df['tokens'] = df['body'].apply(nltk.word_tokenize)
# Convert the tokens into string values
df_tokens_list = df.tokens.tolist()
# Convert the tokens into lowercase: lower_tokens
lower_tokens = [[string.lower() for string in sublist] for sublist in df_tokens_list]
# Import Counter
from collections import Counter
# Create a Counter with the lowercase tokens: bow_simple
bow_simple = Counter(x for xs in lower_tokens for x in set(xs))
# Print the 10 most common tokens
print(bow_simple.most_common(10))
#2. Text preprocessing practice
# Import WordNetLemmatizer
from nltk.stem import WordNetLemmatizer
# Retain alphabetic words: alpha_only
alpha_only = [t for t in bow_simple if t.isalpha()]
# Remove all stop words: no_stops
from nltk.corpus import stopwords
no_stops = [t for t in alpha_only if t not in stopwords.words("english")]
# Instantiate the WordNetLemmatizer
wordnet_lemmatizer = WordNetLemmatizer()
# Lemmatize all tokens into a new list: lemmatized
lemmatized = [wordnet_lemmatizer.lemmatize(t) for t in no_stops]
# Create the bag-of-words: bow
bow = Counter(lemmatized)
print(bow)
# Print the 10 most common tokens
print(bow.most_common(10))
The most frequent words after preprocessing are:
[('dry', 3), ('try', 3), ('clean', 3), ('love', 2), ('one', 2), ('serum', 2), ('eye', 2), ('boot', 2), ('woman', 2), ('cream', 2)]
This does not match what we get if we count these words by hand in Excel.
Do you have any idea what might be wrong with my code? I would appreciate any help in that regard.
The link to the file is here:
https://www.dropbox.com/scl/fi/43nu0yf45obbyzprzc86n/combined_file.xlsx?dl=0&rlkey=7j959kz0urjxflf6r536brppt
A: The problem is that the bow_simple value is a Counter, which you then process further. This means that every item appears only once in the list, so the end result merely counts how many variations of the words appear in the Counter after lowercasing and NLTK processing. The solution is to create a flattened word list and feed that into alpha_only:
# Create a Counter with the lowercase tokens: bow_simple
wordlist = [item for sublist in lower_tokens for item in sublist] #flatten list of lists
bow_simple = Counter(wordlist)
Then use wordlist in alpha_only:
alpha_only = [t for t in wordlist if t.isalpha()]
Output:
[('eye', 3617), ('product', 2567), ('cream', 2278), ('skin', 1791), ('good', 1081), ('use', 1006), ('really', 984), ('using', 928), ('feel', 798), ('work', 785)]
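To see the difference concretely, here is a minimal, self-contained sketch with made-up tokens (not the real spreadsheet data). Counting over per-document sets collapses each word to one count per document, while counting over the flattened word list keeps the true frequencies:

```python
from collections import Counter

# Hypothetical tokenized documents, standing in for lower_tokens
lower_tokens = [["eye", "cream", "eye"], ["eye", "serum"]]

# Counting over per-document sets: each word counts once per document
bow_simple = Counter(x for xs in lower_tokens for x in set(xs))

# Counting over the flattened word list keeps true frequencies
wordlist = [item for sublist in lower_tokens for item in sublist]
bow = Counter(wordlist)

print(bow_simple["eye"], bow["eye"])  # 2 3
```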
| |
doc_23537601 | EXAMPLE: I want the second maximum (5.), between the 7th and the 11th position inside the array.
import numpy as np
b = np.array([3, np.nan, 5.3, 7., 8,5., 0, 1, 3, 5., 2.4, .1, .3, 0.5])
c = np.nanmax(b)
d = np.nanargmax(b)
I tried to build my own function; it fails because of the NaN's -- and it's ugly. See below.
def rightmax(vector,s,f):
l = 0
peak = 0
ml = 0
for val in vector:
if l < s or l >= f:
continue
elif val > peak:
peak = val
ml = l
l = l+1
return peak, ml
A: It sounds like you want to find the last local maxima in the array. I.e. in your example there are two local maxima of 8 and 5. at positions 4 and 9 respectively (0 based array counting). So you are looking for an answer of 5., 9. Assuming I've interpreted this correctly then just grabbing the max values isn't going to get you the answer. You need to find the maxima as the values go up and down along the vector.
You can use argrelextrema from scipy.signal to find the maxima. However it does not handle nan values without some treatment.
Assuming the nan values should not affect the outcome then you could safely replace them by interpolating between adjacent values e.g. using a simple average. e.g. in your example array you could process it to replace np.nan with (5.3 + 3)/2. Giving 4.15 (this ensures you don't promote a nan to a minima or maxima accidentally which could happen if you assume either a very small or very large value to replace them). Once you have done this you can apply argrelextrema easily:
import numpy as np
from scipy.signal import argrelextrema
# original array; the nan values are then replaced by interpolation
b = np.array([3, np.nan, 5.3, 7., 8, 5., 0, 1, 3, 5., 2.4, .1, .3, 0.5])
mask = np.isnan(b)
b[mask] = np.interp(np.flatnonzero(mask), np.flatnonzero(~mask), b[~mask])
c = argrelextrema(b, np.greater)
maxIdx = c[-1] #last element of c
maxVal = b[maxIdx]
A: You said you wanted this in Python; does this handle things for you? (Note that Python's built-in max() does not treat NaN specially; comparisons against NaN are simply False, so the NaN here just happens to be skipped.)
import numpy as np
def local_max(a, start, finish):
local = a[start:finish+1]
loc_max = max(local)
loc_pos = local.index(loc_max) + start
return loc_max, loc_pos
data = [3, np.nan, 5.3, 7.0, 8, 5.0, 0, 1, 3, 5.0, 2.4, 0.1, 0.3, 0.5]
print(local_max(data, 7, 11))
print(local_max(data, 0, 5))
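For completeness, a NaN-safe numpy sketch of the "max within a slice" part of the question; the function name here is illustrative, not from the original code:

```python
import numpy as np

def slice_nanmax(a, start, finish):
    """Return (value, index) of the maximum in a[start:finish+1], ignoring NaN."""
    window = np.asarray(a, dtype=float)[start:finish + 1]
    local_idx = int(np.nanargmax(window))   # index within the window
    return float(window[local_idx]), local_idx + start

b = [3, np.nan, 5.3, 7.0, 8, 5.0, 0, 1, 3, 5.0, 2.4, 0.1, 0.3, 0.5]
print(slice_nanmax(b, 7, 11))  # (5.0, 9)
```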
| |
doc_23537602 | In fact, i can't find post-checkout.sample hook in the hooks repository under /.git repository.
Is post-checkout.sample hook supported on windows ?
When i installed the same version of git on linux i found the post-checkout.sample hook.
I even tried with the git 2.23.0 version and i had the same problem.
I tried to create post-checkout that print a simple message "hello". But it doesn't work. However when I copied this file in pre-commit it works.
Any suggestions?
A: I never saw a post-checkout.sample in mingw64/share/git-core/templates/hooks/ of a Git For Windows distribution.
But that hook should work, provided you make it:
* a file named "post-checkout"
* a bash script (see an example here)
* placed in your repo/.git/hooks folder
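For reference, a minimal post-checkout hook could look like this (the message is just an illustration; Git passes the previous HEAD, the new HEAD, and a branch-checkout flag as arguments):

```shell
#!/bin/sh
# .git/hooks/post-checkout receives: previous HEAD, new HEAD, branch-checkout flag
echo "hello from post-checkout (branch checkout: $3)"
```

Save it as .git/hooks/post-checkout with no file extension and mark it executable (chmod +x .git/hooks/post-checkout).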
There was actually a proposal (RFC) for a post-checkout.sample in 2009, but it was not picked up at the time.
The question was asked (also in 2009):
I also noticed that the
post-checkout sample does not exist when I init a new archive. Is this a
bug?
No, it's security.
Hooks are executable files and shouldn't blindly be
copied around for security reasons.
A: It seems that it doesn't work on an empty repository.
I just committed a file in my repository, and when I execute git checkout -b new_branch, the post-checkout hook works.
| |
doc_23537603 | root@2c3549fe3169:/sample# cargo
error: command failed: 'cargo'
info: caused by: No such file or directory (os error 2)
The weird thing is, I can see the executables
root@2c3549fe3169:/sample# ls -l /root/.cargo/bin/
total 101440
-rwxr-xr-x 10 root root 10383380 Feb 17 21:34 cargo
-rwxr-xr-x 10 root root 10383380 Feb 17 21:34 cargo-clippy
-rwxr-xr-x 10 root root 10383380 Feb 17 21:34 cargo-fmt
-rwxr-xr-x 10 root root 10383380 Feb 17 21:34 rls
-rwxr-xr-x 10 root root 10383380 Feb 17 21:34 rust-gdb
-rwxr-xr-x 10 root root 10383380 Feb 17 21:34 rust-lldb
-rwxr-xr-x 10 root root 10383380 Feb 17 21:34 rustc
-rwxr-xr-x 10 root root 10383380 Feb 17 21:34 rustdoc
-rwxr-xr-x 10 root root 10383380 Feb 17 21:34 rustfmt
-rwxr-xr-x 10 root root 10383380 Feb 17 21:34 rustup
root@2c3549fe3169:/sample# date
Sun Feb 17 21:34:21 UTC 2019
root@2c3549fe3169:/sample# file /root/.cargo/bin/cargo
/root/.cargo/bin/cargo: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.9, with debug_info, not stripped
root@2c3549fe3169:/sample# cargo
error: command failed: 'cargo'
info: caused by: No such file or directory (os error 2)
It is installed via:
RUN curl https://sh.rustup.rs -sSf | sh -s -- \
--default-toolchain 1.32.0 \
-y && \
~/.cargo/bin/rustup target add i686-unknown-linux-musl && \
echo "[build]\ntarget = \"i686-unknown-linux-musl\"" > ~/.cargo/config
I can see the file but I cannot seem to run it, even when I switch into that directory:
root@2c3549fe3169:~/.cargo/bin# ./cargo
error: command failed: 'cargo'
info: caused by: No such file or directory (os error 2)
This is what I see when running ldd:
root@4e21c8d37266:/volume# ldd /root/.cargo/bin/cargo
linux-gate.so.1 (0xf7f41000)
libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xf774c000)
librt.so.1 => /lib/i386-linux-gnu/librt.so.1 (0xf7742000)
libpthread.so.0 => /lib/i386-linux-gnu/libpthread.so.0 (0xf7723000)
libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xf7705000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xf7529000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xf7427000)
/lib/ld-linux.so.2 (0xf7f43000)
This is my complete Dockerfile
FROM i386/ubuntu
RUN apt-get update && apt-get install -y \
cmake \
curl \
file \
git \
g++ \
python \
make \
nano \
ca-certificates \
xz-utils \
musl-tools \
pkg-config \
apt-file \
xutils-dev \
--no-install-recommends && \
rm -rf /var/lib/apt/lists/*
RUN curl https://sh.rustup.rs -sSf | sh -s -- \
--default-toolchain 1.32.0 \
-y && \
~/.cargo/bin/rustup target add i686-unknown-linux-musl && \
echo "[build]\ntarget = \"i686-unknown-linux-musl\"" > ~/.cargo/config
# Compile C libraries with musl-gcc
ENV SSL_VER=1.0.2j \
CURL_VER=7.52.1 \
CC=musl-gcc \
PREFIX=/usr/local \
PATH=/usr/local/bin:$PATH \
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
RUN curl -sL http://www.openssl.org/source/openssl-$SSL_VER.tar.gz | tar xz && \
cd openssl-$SSL_VER && \
./Configure no-shared --prefix=$PREFIX --openssldir=$PREFIX/ssl no-zlib -m32 linux-elf -fPIC -fno-stack-protector && \
make depend 2> /dev/null && make -j$(nproc) && make install && \
cd .. && rm -rf openssl-$SSL_VER
RUN curl https://curl.haxx.se/download/curl-$CURL_VER.tar.gz | tar xz && \
cd curl-$CURL_VER && \
./configure --enable-shared=no --enable-static=ssl --enable-optimize --prefix=$PREFIX --host=i686-pc-linux-gnu CFLAGS=-m32 \
--with-ca-path=/etc/ssl/certs/ --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt --without-ca-fallback && \
make -j$(nproc) && make install && \
cd .. && rm -rf curl-$CURL_VER
ENV SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt \
SSL_CERT_DIR=/etc/ssl/certs \
OPENSSL_LIB_DIR=$PREFIX/lib \
OPENSSL_INCLUDE_DIR=$PREFIX/include \
OPENSSL_DIR=$PREFIX \
OPENSSL_STATIC=1 \
PATH=/usr/local/bin:/root/.cargo/bin:$PATH
RUN echo $PATH
And strace'ing the cargo binary as per comments:
root@156da6108ff8:~/.cargo/bin# strace -f -e trace=execve cargo
execve("/root/.cargo/bin/cargo", ["cargo"], 0xfffdd8fc /* 20 vars */) = 0
execve("/root/.rustup/toolchains/1.32.0-x86_64-unknown-linux-gnu/bin/cargo", ["/root/.rustup/toolchains/1.32.0-"...], 0x57d95620 /* 25 vars */) = -1 ENOENT (No such file or directory)
error: command failed: 'cargo'
info: caused by: No such file or directory (os error 2)
+++ exited with 1 +++
A: So, here is the summary of our investigations.
The base image used for build is i386/ubuntu with 32-bit environment inside, however, this image does nothing to appropriately mask the results of uname(2) calls (by having something like setarch linux32 as entrypoint), so, when running on 64-bit system (your case), any process inside the build container calling uname(2) or uname(1) sees x86_64 instead of i686. This is the root of the problem.
When you install cargo, you download and run the installation script, which detects the platform it runs on and downloads the appropriate version of rustup-init. The platform detection in this script recognizes correctly that it runs in 32-bit environment but on 64-bit kernel, so the script downloads 32-bit version of rustup-init. However, rustup-init decides that it runs on x86_64 (probably it sees x86_64 returned by uname(2), but does not perform the check for "32-bit environment on 64-bit kernel" case, like the installer script does). You can see it during the installation without -y:
Current installation options:
default host triple: x86_64-unknown-linux-gnu
default toolchain: stable
modify PATH variable: yes
So, rustup installs 64-bit toolchain, and you end up with situation when calling cargo results in running 64-bit binary in 32-bit environment, so you see the error.
I still feel there is some sort of inconsistent behavior here, because both the installation script and rustup-init are parts of the same project, and I don't really see a reason why they should detect the platform differently in the same environment (why can't rustup-init just be as smart as the installation script is?).
As @Shepmaster noticed, this is a known issue (Rustup installs 64bit compiler on a 32bit Docker instance). There are two workarounds possible:
* force the platform for the default toolchain by passing --default-host i686-unknown-linux-gnu to the installer;
* fool the installer by running it under setarch linux32 so that its call to uname(2) will see i686 instead of x86_64.
Personally, I would choose the first option, as it seems to be less hacky.
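Applied to the install line from the Dockerfile above, the first workaround might look like this (a sketch only; the rest of the original RUN command would stay as it is):

```shell
RUN curl https://sh.rustup.rs -sSf | sh -s -- \
    --default-toolchain 1.32.0 \
    --default-host i686-unknown-linux-gnu \
    -y
```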
| |
doc_23537604 | {
"defaultConnection": "default",
"connections": {
"default": {
"connector": "mongoose",
"settings": {
"client": "mongo",
"uri": "${process.env.MONGO_URL}",
"database": "${process.env.DATABASE_NAME}",
"username": "${process.env.DATABASE_USERNAME}",
"password": "${process.env.DATABASE_PASSWORD}",
"port": "${process.env.DATABASE_PORT || 27017}"
},
"options": {
"authenticationDatabase": "${process.env.DATABASE_AUTHENTICATION_DATABASE || ''}",
"useUnifiedTopology": "${process.env.USE_UNIFIED_TOPOLOGY || false}",
"ssl": "${process.env.DATABASE_SSL || false}"
}
}
}
}
Here is my config/environments/production/server.json
{
"host": "${process.env.HOST || '0.0.0.0'}",
"port": "${process.env.PORT || 1337}",
"production": true,
"proxy": {
"enabled": false
},
"cron": {
"enabled": false
},
"admin": {
"autoOpen": false
}
}
I believe the original issue was that I was accidentally using the PORT variable for the database instead of the DATABASE_PORT variable.
However, now that I have that worked out I am getting this error:
error Error: listen EADDRNOTAVAIL: address not available <my-host-ip>:5000
I thought maybe a wrong port was being cached somewhere, but regardless of what I do, I can't seem to get it to work. Do I need to enable SSL and then add a Let's Encrypt cert to my domain? Am I using the wrong ports? Should I set a proxy in server.json?
PS. I am using Dokku Mongo. I didn't think that would be an issue, considering the dynos don't go to sleep like they would on Heroku. Is that an incorrect assumption?
Also, there are other apps running on the droplet. Maybe a proxy problem?
| |
doc_23537605 | python -m http.server 8000
It works in Chrome on my PC, but when I try it in Chrome Mobile on a smartphone I keep getting this error: "localhost refused the connection, ERR_CONNECTION_REFUSED". How do I solve this?
A: localhost refers to your computer, not your phone.
If the phone is on the same Wi-Fi network as the computer, try connecting to the local IP address of the computer that is running the server.
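A sketch of the usual workflow (the address 192.168.1.42 is made up; use whatever your machine reports):

```shell
# On the computer: serve (recent Python versions bind all interfaces by default)
python -m http.server 8000

# Find the computer's LAN address (Linux; use ifconfig on macOS, ipconfig on Windows)
hostname -I        # e.g. 192.168.1.42

# On the phone's browser, use the computer's address, not localhost:
#   http://192.168.1.42:8000
```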
| |
doc_23537606 | Here is an example of the code i use:
$texttoprint = "RECIPT TEXT \n NEXT LINE \n MORE STUFF";
$texttoprint = stripslashes($texttoprint);
$fp = fsockopen("192.168.192.168", 9100, $errno, $errstr, 10);
if (!$fp) {
echo "$errstr ($errno)<br />\n";
} else {
fwrite($fp, "\033\100");
$out = $texttoprint . "\r\n";
fwrite($fp, $out);
fwrite($fp, "\012\012\012\012\012\012\012\012\012\033\151\010\004\001");
fclose($fp);
}
If you can tell me how I can change the font size of a particular line, that would be amazing. Thanks.
From what I have read, the \012 values are ESC/P codes; here is a link to the manual:
http://files.support.epson.com/pdf/general/escp2ref.pdf
But I don't understand how to apply this:
ESC E - SELECT BOLD FONT - C110
Answer:
Just for users who may need to know, here is what you need. I found the answers in a Python lib (http://code.google.com/p/python-escpos/downloads/list).
Here is a list of how to do the ESC codes (found in that Python lib): http://sheepy121.webhost4life.com/ESC.txt
Here is the document for all ESC codes: http://files.support.epson.com/pdf/general/escp2ref.pdf
And the code above shows how to print from PHP to a local thermal printer (it does not work without the network name).
Happy printing
A: This is more a task of understanding the manual; I am not sure how you managed the rest before.
On page C3 of the manual you get a command overview. ESC E is the command to select a bold font (details on page C110). You want to change the font size so you need ESC P, ESC M or ESC g.
ESC stands for the escape character: decimal 27 in the ASCII table, hex 1B, octal 33. Place "\033P" within your string to try it out, as that is the way to include a special character by its octal code in a PHP string (see the PHP string manual).
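To make the escape sequences concrete, here is a small sketch (in Python, purely to show the byte values; in the ESC/P command set, ESC E selects the bold font and ESC F cancels it):

```python
ESC = b"\x1b"  # the escape character: decimal 27, hex 1B, octal 033

def bold(text: bytes) -> bytes:
    # Wrap the text in ESC E (bold on) and ESC F (bold off)
    return ESC + b"E" + text + ESC + b"F"

print(bold(b"TOTAL: 9.99"))  # b'\x1bETOTAL: 9.99\x1bF'
```

In the PHP code from the question, the equivalent would be concatenating "\033E" before and "\033F" after the line in $texttoprint.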
A: Use the PHP printer extension; you can change the font size and font family as well.
Here is some sample PHP code (the printer_* functions come from the PECL printer extension, which is available on Windows only):
header('Content-Type: text/plain; charset=UTF-8');
$printer = "\\\\BALA\\EPSON TM-T88IV Receipt";
$handle = printer_open($printer);
printer_start_doc($handle,"Testpage");
printer_start_page($handle);
$font = printer_create_font("Arial", 20, 10, 700, false, false, false, 0);
$pen = printer_create_pen(PRINTER_PEN_DOT, 1, "000000");
printer_select_pen($handle, $pen);
printer_select_font($handle, $font);
printer_draw_text($handle, "welcome", 10, 10);
printer_delete_font($font);
printer_delete_pen($pen);
printer_end_page($handle);
printer_end_doc($handle);
printer_close($handle);
| |
doc_23537607 | I need to check if there are 2 same, if so, delete the old one.
If there is only one, do nothing.
DELETE FROM files
WHERE url
IN (SELECT id FROM files WHERE url='$url' ORDER BY date ASC LIMIT 1)
I'm getting this error:
#1235 - This version of MariaDB doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'
Can you help me please? Thanks.
A: First you need to get the duplicated urls, and then find the oldest id for each of them using ROW_NUMBER(), like this:
DELETE FROM files
WHERE id IN (
SELECT t.id
FROM
(
SELECT
id,
ROW_NUMBER() OVER (PARTITION BY url ORDER BY date ASC) rw
FROM files
WHERE url IN (
SELECT url
FROM files
GROUP BY url
HAVING COUNT(*) > 1
)
) t
WHERE t.rw = 1
)
A: The task is rather simple: you want to delete rows for which a newer entry exists.
DELETE FROM files
WHERE EXISTS
(
SELECT NULL
FROM files newer
WHERE newer.url = files.url AND newer.date > files.date
);
An index to support this statement would look like this:
CREATE INDEX idx ON files (url, date);
The above would be my preferred approach. But there are other methods of course. For instance:
DELETE FROM files
WHERE (url, date) NOT IN
(
SELECT url, MAX(date)
FROM files
GROUP BY url
);
It's the same index that would help this statement, too.
Both statements remove all duplicates, no matter whether you have two entries for a URL or hundreds.
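As a quick sanity check of the EXISTS variant, here is a self-contained SQLite run with made-up rows (only the newest row per url survives):

```python
import sqlite3

# Made-up table contents, mirroring the files(url, date) schema
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, url TEXT, date TEXT)")
conn.executemany(
    "INSERT INTO files (url, date) VALUES (?, ?)",
    [("a.txt", "2023-01-01"), ("a.txt", "2023-02-01"), ("b.txt", "2023-01-15")],
)
# Delete every row for which a newer row with the same url exists
conn.execute(
    "DELETE FROM files WHERE EXISTS ("
    " SELECT NULL FROM files AS newer"
    " WHERE newer.url = files.url AND newer.date > files.date)"
)
rows = conn.execute("SELECT url, date FROM files ORDER BY url").fetchall()
print(rows)  # [('a.txt', '2023-02-01'), ('b.txt', '2023-01-15')]
```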
If this is really only about deleting the oldest row, however (because there cannot be more than two entries for a URL, or because you want to keep duplicates except for the oldest), it gets faster this way:
DELETE FROM files
WHERE (url, id) IN
(
SELECT url, MIN(id)
FROM files
-- WHERE url = @url -- add this, if this is only about one URL
GROUP BY url
HAVING COUNT(*) > 1
);
| |
doc_23537608 | The array looks something like this:
var elem = ["Joe", "M"+String.fromCharCode(13)+"ry", "Element_03", "Element_04"];
I attempted to use a for loop to scan through the array and conditionally check each element's character codes, but I couldn't come up with anything.
A: var hash={};
elem.forEach(function(str){
for(var i=0;i<str.length;i++){
hash[str.charCodeAt(i)]=true;
}
});
console.log(Object.keys(hash));
Simply iterate over the array and chars, and add each char code into a hash table.
A: If I understand your question correctly, you're trying to find non-alphanumeric characters in each string of the array. For example, CharCode 13 is a carriage return. Depending on what you consider to be "special" this might work.
var elem = ["Joe", "M"+String.fromCharCode(13)+"ry", "Element_03", "Element_04"];
var codesFound = {};
elem.join('').split('').forEach(char => {
var code = char.charCodeAt(0);
if ( code < 32 || code > 126 ) {
codesFound[code] = true;
}
});
console.log(Object.keys(codesFound));
I'm using this table as a guide, but you can get the gist from my code:
http://www.asciitable.com/
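An alternative sketch using a regular expression, treating anything outside the printable ASCII range 32..126 as "special" (adjust the range to your own definition):

```javascript
const elem = ["Joe", "M" + String.fromCharCode(13) + "ry", "Element_03", "Element_04"];

// Match every character outside printable ASCII, then deduplicate the codes
const matches = elem.join("").match(/[^\x20-\x7E]/g) || [];
const special = [...new Set(matches)].map(ch => ch.charCodeAt(0));

console.log(special); // [ 13 ]
```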
| |
doc_23537609 | Requirement is I have to convert absolute time into relative time:
Input:
2013/06/19 05:16:51:209 INFO
2013/06/19 05:16:54:365 INFO
2013/06/19 05:16:54:365 INFO
Expected output :
000000.000000 INFO
000003.156000 INFO
000003.156000 INFO
So here I have to take 05:16:51:209 as a reference time and make it 0 then need to subtract it with next time.
Please let me know if there is any function available for the same.
A: It's a very tricky problem (I like that :D), but I will give you a solution (it's not finished, but it should clear up a large part of your problems):
use DateTime::Format::Strptime;
my $parser = DateTime::Format::Strptime->new(
pattern => '%Y/%m/%d %H:%M:%S:%N',
on_error => 'croak',
);
my @dates = (
'2013/06/19 05:16:51:209 INFO',
'2013/06/19 05:16:54:365 INFO',
'2013/06/19 05:16:54:365 INFO',
);
my %dates;
$dates{$_} = $parser->parse_datetime( $_ ) foreach @dates;
I use DateTime::Format::Strptime to parse the datetime and get a DateTime object back. Then you just need to call one of the delta methods of the DateTime module to get what you need :)
Here are some links for you:
* http://p3rl.org/DateTime::Format::Strptime
* http://p3rl.org/DateTime
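The same idea can also be sketched outside Perl, in case a comparison helps; this Python version assumes the exact "YYYY/MM/DD HH:MM:SS:mmm LEVEL" layout from the question:

```python
from datetime import datetime

def to_relative(lines):
    # First timestamp becomes the zero reference; later lines show the offset
    out, base = [], None
    for line in lines:
        stamp, tail = line[:23], line[24:]
        # the last colon separates milliseconds; swap it for a dot so %f parses
        dt = datetime.strptime(stamp[:19] + "." + stamp[20:], "%Y/%m/%d %H:%M:%S.%f")
        base = base or dt
        out.append(f"{(dt - base).total_seconds():013.6f} {tail}")
    return out

lines = [
    "2013/06/19 05:16:51:209 INFO",
    "2013/06/19 05:16:54:365 INFO",
    "2013/06/19 05:16:54:365 INFO",
]
print("\n".join(to_relative(lines)))  # matches the expected output above
```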
| |
doc_23537610 | $googlemapstatic="http://maps.googleapis.com/maps/api/staticmap?center=(location)&zoom=7&size=1000x1000&markers=color%3ablue|label%3aS|11211&sensor=false&markers=size:mid|color:0x000000|label:1|(location)";
and i have x and y for its latitude and longitude
$koorX='32.323213123';
$koorY='39.3213';
and I'm using str_replace to change the location and the marker inside the static map URL.
$newlocation=$koorX.','.$koorY;
$googlemapstatic=str_replace('location',$newlocation,$googlemapstatic);
but it shows me a different location than the input.
<img style='width:15.61cm; height:12.0cm' src='".$googlemapstatic."'>
If I write that x,y manually in the browser, it shows the correct location.
I assume that there is some mistake in the str_replace call, but I couldn't find it.
A: You could try this one :
$googlemapstatic = str_replace('(location)','('.$newlocation.')',$googlemapstatic);
A: use
ini_set('display_errors','on');
error_reporting(E_ALL);
If you get a deprecation kind of error, then try str_ireplace:
$newlocation=$koorX.','.$koorY;
$googlemapstatic=str_ireplace('location',$newlocation,$googlemapstatic);
| |
doc_23537611 | Here is my Code:
char *strPtr = NULL;
int tmpChar = 0; /* fgetc returns int, so EOF can be distinguished from a character */
inputFile = fopen(input_file, "r");
fseek(inputFile, 0, SEEK_END); // seek to end of file
fileSize = ftell(inputFile); // get current file pointer
rewind(inputFile);
strPtr = (char*) realloc(strPtr, fileSize * sizeof(char));
int counter = 0;
while ((tmpChar = fgetc(inputFile)) != EOF)
{
strPtr[counter] = tmpChar;
counter++;
if (counter == fileSize)
printf("OK!");
}
printf("Filesize: %d, Counter: %d", fileSize,counter);
Now to my problem: with the last printf I get two different values, for example Filesize 127 and Counter 118.
Additionally, at the END of my strPtr variable there is unexpected content like "ÍÍÍÍÍÍÍÍÍýýýýüe".
Notepad++ also says that at the end of the file I am at position 127, so what is the problem with the 118?
A: If you open the file in text mode (the default) on Windows, the CRT file functions will convert any \r\n to \n. The effect of this is every line you read will be 1 byte shorter than the original with \r\n.
To prevent such conversions, use "binary" mode, by adding a "b" mode modifier, e.g. "rb".
inputFile = fopen("example.txt", "rb")
https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/fopen-wfopen?view=vs-2019
In text mode, carriage return-linefeed combinations are translated into single linefeeds on input, and linefeed characters are translated to carriage return-linefeed combinations on output.
while ((tmpChar = fgetc(inputFile)) != EOF)
{
strPtr[counter] = tmpChar;
counter++;
if (counter == fileSize)
printf("OK!");
}
Additionally, this loop (assuming the file does not contain any NUL bytes) will not null-terminate your string. If you later use strPtr in a way that expects a terminator (e.g. printf, strcmp, etc.), it will read past the valid range.
If you do want a null terminator, you need to add one after. To do this you also need to be sure you allocated an extra byte.
strPtr = realloc(strPtr, (fileSize + 1) * sizeof(char));
while (...
strPtr[counter] = '\0'; // Add null terminator at end.
To handle files/strings that might contain nulls you can't use null terminated strings at all (e.g. use memcmp with size instead of strcmp).
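Putting both fixes together, here is a hedged sketch of a binary-mode file reader (error handling kept minimal; the function name is illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: read a whole file in binary mode (no CRLF translation) and
   null-terminate the buffer so it is safe to use as a C string. */
char *read_file(const char *path, long *out_size) {
    FILE *f = fopen(path, "rb");      /* "b": bytes arrive unmodified */
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    char *buf = malloc(size + 1);     /* +1 for the terminator */
    if (!buf) { fclose(f); return NULL; }
    fread(buf, 1, (size_t)size, f);
    fclose(f);
    buf[size] = '\0';
    if (out_size) *out_size = size;
    return buf;
}
```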
| |
doc_23537612 | @interface BNPieChart : UIView {
@private
NSMutableArray* slicePointsIn01;
}
And here is the code from the .m file:
- (id)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        [self initInstance];
        self.frame = frame;
        slicePointsIn01 = [[NSMutableArray alloc]
                           initWithObjects:nFloat(0.0), nil];
    }
    return self;
}

- (void)initInstance {
    slicePointsIn01 = [[NSMutableArray alloc]
                       initWithObjects:nFloat(0.0), nil];
}
I did try adding a property / synthesize / dealloc for slicePointsIn01 however this gives me the same error.
What am I doing wrong ?
A: slicePointsIn01 gets set to two different objects: once in initInstance, and then once again later in initWithFrame:.
Because the first assignment was to an alloc'd object, and that object was never released before you changed the assignment, the original object gets leaked.
If you add a property, you need to make sure you're actually using it, and not using the instance variable directly. You would do that by doing assignments in one of the two following ways:
self.myProperty = //something;
[self setMyProperty: //something];
Note (thanks @André): make sure the something object has a retain count of 0 upon assigning (i.e. autoreleased, usually), because the property retains it for you.
//NOT like this:
myProperty = //something;
This line uses the instance variable directly. It causes your leak because without using the property, the reference count on the object pointed to is not altered.
Edit:
You shouldn't ever check the retain count. Just follow the rules within each place you use the object, and you will be fine. Here are the rules:
* You own any object you create by allocating memory for it or copying it, i.e. with the methods alloc, allocWithZone:, copy, copyWithZone:, mutableCopy, mutableCopyWithZone:
* If you are not the creator of an object, but want to ensure it stays in memory for you to use, you can express an ownership interest in it by calling retain.
* If you own an object, either by creating it or expressing an ownership interest, you are responsible for releasing it when you no longer need it, by calling release or autorelease.
* Conversely, if you are not the creator of an object and have not expressed an ownership interest, you must not release it.
* If you receive an object from elsewhere in your program, it is normally guaranteed to remain valid within the method or function it was received in. If you want it to remain valid beyond that scope, you should retain or copy it. If you try to release an object that has already been deallocated, your program crashes.
You don't have to write setMyProperty. When you @synthesize a property, that method is created for you.
| |
doc_23537613 | Additionally, how do I bundle in a CA certificate to be included within the PFX file?
// Generate the private/public keypair
RsaKeyPairGenerator kpgen = new RsaKeyPairGenerator ();
CryptoApiRandomGenerator randomGenerator = new CryptoApiRandomGenerator ();
kpgen.Init (new KeyGenerationParameters (new SecureRandom (randomGenerator), 2048));
AsymmetricCipherKeyPair keyPair = kpgen.GenerateKeyPair ();
// Generate the CSR
X509Name subjectName = new X509Name ("CN=domain.com/name=Name");
Pkcs10CertificationRequest kpGen = new Pkcs10CertificationRequest ("SHA256withRSA", subjectName, keyPair.Public, null, keyPair.Private);
string certCsr = Convert.ToBase64String (kpGen.GetDerEncoded ());
// ** certCsr is now sent to be signed **
// ** let's assume that we get "certSigned" in response, and also have the CA **
string certSigned = "[standard signed certificate goes here]";
string certCA = "[standard CA certificate goes here]";
// Now how do I import certSigned and certCA
// Finally how do I export everything as a PFX file?
A: Bouncy Castle is a very powerful library; however, the lack of documentation makes it quite difficult to work with. After searching for much too long through all of the classes and methods, I finally found what I was looking for. The following code takes the previously generated private key, bundles it together with the signed certificate and the CA, and then saves it as a .PFX file:
// Import the signed certificate
X509Certificate signedX509Cert = new X509CertificateParser ().ReadCertificate (Encoding.UTF8.GetBytes (certSigned));
X509CertificateEntry certEntry = new X509CertificateEntry (signedX509Cert);
// Import the CA certificate
X509Certificate signedX509CaCert = new X509CertificateParser ().ReadCertificate (Encoding.UTF8.GetBytes (certCA ));
X509CertificateEntry certCaEntry = new X509CertificateEntry (signedX509CaCert);
// Prepare the pkcs12 certificate store
Pkcs12Store store = new Pkcs12StoreBuilder ().Build ();
// Bundle together the private key, signed certificate and CA
store.SetKeyEntry (signedX509Cert.SubjectDN.ToString () + "_key", new AsymmetricKeyEntry (keyPair.Private), new X509CertificateEntry[] {
certEntry,
certCaEntry
});
// Finally save the bundle as a PFX file
using (var filestream = new FileStream (@"CertBundle.pfx", FileMode.Create, FileAccess.ReadWrite)) {
store.Save (filestream, "password".ToCharArray (), new SecureRandom ());
}
Feedback and improvements are welcome!
| |
doc_23537614 |
A: The simplest way (I can think of) is to change the default values used by the UIManager. This will affect all the menu bars and menu items in the application though...
import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.EventQueue;
import javax.swing.JFrame;
import javax.swing.JMenu;
import javax.swing.JMenuBar;
import javax.swing.JPanel;
import javax.swing.UIManager;
import javax.swing.UnsupportedLookAndFeelException;
public class TestMenuBar {
public static void main(String[] args) {
new TestMenuBar();
}
public TestMenuBar() {
EventQueue.invokeLater(new Runnable() {
@Override
public void run() {
try {
UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
} catch (ClassNotFoundException ex) {
} catch (InstantiationException ex) {
} catch (IllegalAccessException ex) {
} catch (UnsupportedLookAndFeelException ex) {
}
UIManager.put("MenuBar.background", Color.RED);
UIManager.put("Menu.background", Color.GREEN);
UIManager.put("MenuItem.background", Color.MAGENTA);
JMenu mnu = new JMenu("Testing");
mnu.add("Menu Item 1");
mnu.add("Menu Item 2");
JMenuBar mb = new JMenuBar();
mb.add(mnu);
mb.add(new JMenu("Other"));
JFrame frame = new JFrame("Test");
frame.setJMenuBar(mb);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setLayout(new BorderLayout());
frame.add(new JPanel());
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
});
}
}
A: A simple way to do it is with .setBackground(Color.RED) and setOpaque(true):
menubar.setBackground(Color.RED);
menu.setBackground(Color.yellow);
menubar.setOpaque(true);
menu.setOpaque(true);
This will give the colors of your choice to both the menu bar and the menu.
A: Create a new class that extends JMenuBar:
public class BackgroundMenuBar extends JMenuBar {
Color bgColor=Color.WHITE;
public void setColor(Color color) {
bgColor=color;
}
@Override
protected void paintComponent(Graphics g) {
super.paintComponent(g);
Graphics2D g2d = (Graphics2D) g;
g2d.setColor(bgColor);
g2d.fillRect(0, 0, getWidth() - 1, getHeight() - 1);
}
}
Now you use this class instead of JMenuBar and set the background color with setColor().
A: You would probably need to change the opacity of the menu items, i.e.:
JMenuItem item= new JMenuItem("Test");
item.setOpaque(true);
item.setBackground(Color.CYAN);
You can also achieve that globally using UIManager, for example:
UIManager.put("MenuItem.background", Color.CYAN);
UIManager.put("MenuItem.opaque", true);
A: Mine only worked when I changed:
UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
to:
UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName());
Otherwise, the colors remained the same.
A: public void run() {
UIManager.put("MenuBar.background", new java.awt.Color(255, 245, 157));
UIManager.put("MenuBar.opaque", true);
UIManager.put("Menu.background", new java.awt.Color(255, 245, 157));
UIManager.put("Menu.opaque", true);
UIManager.put("MenuItem.background",new java.awt.Color(255, 245, 157));
UIManager.put("MenuItem.opaque", true);
new MenuPrincipal().setVisible(true);
}
The menubar does not change color, but the rest do (menu and menuitem)
A: It's very simple.
Here's the code:
menu.setBackground(Color.DARK_GRAY);
Similarly, you can use other colors such as GREEN, BLUE, DARK_GRAY, LIGHT_GRAY, BLACK, RED, etc.
This is a simple way to change the color of a component in Java.
Note: this applies to AWT and Swing components only; it is useless in JavaFX, JFace, and SWT.
Thank you,
Dereck Smith
| |
doc_23537615 | I want to get these PHAssets, turn them into images, then convert them to base64 strings so that I can send them to my database.
But there's a problem: the images have really low quality when I try to get them from the PHAsset array.
Here's my code:
let requestOptions = PHImageRequestOptions()
requestOptions.version = .current
requestOptions.deliveryMode = .opportunistic
requestOptions.resizeMode = .exact
requestOptions.isNetworkAccessAllowed = true
let imagePicker = OpalImagePickerController()
imagePicker.maximumSelectionsAllowed = 4
imagePicker.allowedMediaTypes = Set([PHAssetMediaType.image])
self.presentOpalImagePickerController(imagePicker, animated: true,
select: { (assets) in
for a in assets{
// print(a)
// self.img.append(a.image)
self.img.append(a.imagehd(targetSize: CGSize(width: a.pixelWidth, height: a.pixelHeight), contentMode: PHImageContentMode.aspectFill, options: requestOptions))
and the function:
func imagehd(targetSize: CGSize, contentMode: PHImageContentMode, options: PHImageRequestOptions?) -> UIImage {
var thumbnail = UIImage()
let imageManager = PHCachingImageManager()
imageManager.requestImage(for: self, targetSize: targetSize, contentMode: contentMode, options: options, resultHandler: { image, _ in
thumbnail = image!
})
return thumbnail
}
I tried giving "requestOptions.version" the ".original" value, and even setting the delivery mode to high quality, but then it just gives me nothing (the image is nil).
I'm really lost. Can someone help?
Thanks a lot.
| |
doc_23537616 | <ItemsControl VerticalAlignment="Center" HorizontalAlignment="Center" ItemsSource="{Binding Maps}">
<ItemsControl.ItemsPanel>
<ItemsPanelTemplate>
<WrapPanel/>
</ItemsPanelTemplate>
</ItemsControl.ItemsPanel>
<ItemsControl.ItemTemplate>
<DataTemplate>
<StackPanel CanVerticallyScroll="True" Margin="5">
<Image Stretch="Uniform" StretchDirection="DownOnly" Height="150" Source="{Binding Thumbnail}"/>
</StackPanel>
</DataTemplate>
</ItemsControl.ItemTemplate>
</ItemsControl>
This is the viewmodel: (Ignore the temporary file paths)
public BindableCollection<MapModel> Maps { get; set; }
public MapListViewModel()
{
Maps = new BindableCollection<MapModel>();
string projectPath = @"C:\Users\james\Documents\Projects\C#\Stratify";
string path = projectPath + @"/Maps";
string[] mapFiles = Directory.GetDirectories(path);
foreach(string mapFile in mapFiles)
{
var mapModel = new MapModel
{
Thumbnail = new BitmapImage(new Uri(mapFile + "/icon.jpg", UriKind.RelativeOrAbsolute))
};
Maps.Add(mapModel);
}
}
This is the MapModel :
public class MapModel : ObservableObject
{
private ImageSource thumbnail;
public ImageSource Thumbnail
{
get { return thumbnail; }
set { OnPropertyChanged(ref thumbnail, value); }
}
}
A: The property name is wrong. It should be Thumbnail, and the property type should be ImageSource:
public class MapModel : ObservableObject
{
private ImageSource thumbnail;
public ImageSource Thumbnail
{
get { return thumbnail; }
set { OnPropertyChanged(ref thumbnail, value); }
}
}
You would assign a BitmapImage like this:
var path = Path.Combine(mapFile, "icon.jpg");
var uri = new Uri(path, UriKind.RelativeOrAbsolute);
var image = new BitmapImage(uri);
var mapModel = new MapModel { Thumbnail = image };
| |
doc_23537617 | this information: student id, section id, current date, status.
Each time the user clicks the button, it inserts this information into the database.
I also check whether the record is a duplicate; if it is, I tell the user "you already inserted this",
but if not, it inserts correctly.
I tried to insert while changing the date each time,
but it still said "you already inserted".
How can I solve this problem so that I can insert every time I change the date in my database?
A: You are only checking whether the user has given input for the fields.
Before inserting the record, you are not checking whether such a record already exists in the database.
If a unique constraint is defined, subsequent inserts are rejected with a data-duplication exception, even for fresh user input.
Check which constraints are defined for data uniqueness.
Based on them, handle the exceptions and inform the user accordingly.
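For example, here is a sketch of that pattern in Python with SQLite (the table and column names are assumed for illustration, not taken from the question): the uniqueness constraint rejects the duplicate, and the code catches the exception and informs the user instead of failing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE attendance (
        student_id INTEGER,
        section_id INTEGER,
        att_date   TEXT,
        status     TEXT,
        UNIQUE (student_id, section_id, att_date)  -- the uniqueness constraint
    )
""")

def record_attendance(student_id, section_id, att_date, status):
    """Insert one attendance row; report a duplicate instead of crashing."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO attendance VALUES (?, ?, ?, ?)",
                (student_id, section_id, att_date, status),
            )
        return "inserted"
    except sqlite3.IntegrityError:
        return "you already inserted this record"

print(record_attendance(1, 10, "2023-05-01", "present"))  # inserted
print(record_attendance(1, 10, "2023-05-01", "present"))  # you already inserted this record
print(record_attendance(1, 10, "2023-05-02", "present"))  # new date, so: inserted
```

Changing the date produces a new, non-conflicting key, so the insert succeeds; repeating the same date is rejected by the constraint, not by the application code.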
| |
doc_23537618 | /**
* Supported request methods.
*/
public interface Method {
int DEPRECATED_GET_OR_POST = -1;
int GET = 0;
int POST = 1;
int PUT = 2;
int DELETE = 3;
}
It probably wouldn't be much trouble to extend the library to support patch requests, so my question is why wouldn't patch requests be supported by the base library? Also, could anyone suggest any good git branches that have already added this support?
A: I finally found an answer to this question, and it is rather silly. The problem is not with the Volley framework: Java's HttpURLConnection does not support PATCH. There are ways on the internet that use Java reflection to set the method field to PATCH, but they bring additional problems.
I finally solved this problem using the X-HTTP-Method-Override header. I made a normal POST request, with a body even, and added this header, like below:
X-HTTP-Method-Override: PATCH
and it worked. Your web server side should support method overriding though.
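The same idea works in any HTTP client: send a POST and carry the real intent in the override header. A minimal sketch in Python (the URL and body are placeholders; the request is built but not actually sent):

```python
import urllib.request

# Build a POST request that asks the server to treat it as PATCH.
body = b'{"name": "new value"}'
req = urllib.request.Request(
    "http://example.com/api/items/1",  # placeholder URL
    data=body,
    method="POST",
)
req.add_header("Content-Type", "application/json")
req.add_header("X-HTTP-Method-Override", "PATCH")

# The wire method stays POST; the override header carries the real intent.
print(req.get_method())  # POST
```

As noted above, this only works if the server-side framework honors method overriding.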
| |
doc_23537619 | + 0: No
+ 1: Yes
* USED CAR: Purpose for this credit/application? (Binary)
+ 0: No
+ 1: Yes
* FURNITURE: Purpose for this credit/application? (Binary)
+ 0: No
+ 1: Yes
* RADIO/TV: Purpose for this credit/application? (Binary)
+ 0: No
+ 1: Yes
* EDUCATION: Purpose for this credit/application? (Binary)
+ 0: No
+ 1: Yes
* RETRAINING: Purpose for this credit/application? (Binary)
+ 0: No
+ 1: Yes
So these are some of the variables I have for a new dataset. Since they are related to the purpose behind loan applications, would it be wise to group them under one variable that I could call "Purpose"?
I plan on running a Random Forest Model. I want to try out both ways and I probably will as soon as I conduct an analysis as it is.
To be clear, I've already checked that two variables don't hold a "Yes" in the same row number.
Is there any downside, when creating decision trees, to combining them into ONE vector/variable that looks like this?
c("NEW_CAR", "RADIO/TV", "RETRAINING", "FURNITURE", "USED_CAR", "USED_CAR", "NEW_CAR")
I'm worried that there is a major downside as each current variable is binary and a tree can only split into two nodes. Perhaps it doesn't make a difference considering that there could be multiple nodes with a choice between the current variables.
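For illustration, collapsing the mutually exclusive flags into one categorical value per row would look something like this (a plain-Python sketch; the column names are taken from the variable list above):

```python
# Mutually exclusive one-hot purpose flags (from the variable list above).
purpose_cols = ["NEW_CAR", "USED_CAR", "FURNITURE", "RADIO/TV", "EDUCATION", "RETRAINING"]

def collapse(row):
    """Return the single flagged purpose for a row, or None if no flag is set."""
    flagged = [col for col in purpose_cols if row.get(col) == 1]
    # As stated above, no two flags hold a "Yes" in the same row.
    assert len(flagged) <= 1
    return flagged[0] if flagged else None

rows = [
    {"NEW_CAR": 1, "USED_CAR": 0, "FURNITURE": 0, "RADIO/TV": 0, "EDUCATION": 0, "RETRAINING": 0},
    {"NEW_CAR": 0, "USED_CAR": 0, "FURNITURE": 0, "RADIO/TV": 1, "EDUCATION": 0, "RETRAINING": 0},
]
purposes = [collapse(r) for r in rows]
print(purposes)  # ['NEW_CAR', 'RADIO/TV']
```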
I'm new. Thank you.
| |
doc_23537620 | Now I have a requirement: the owner (the user creating the node) should be able to subscribe roster users to the node, so that subscribed users start getting published items.
Is there any way to achieve this?
My node creation code is below:
ConfigureForm form = new ConfigureForm(DataForm.Type.submit);
form.setPersistentItems(false);
form.setDeliverPayloads(true);
form.setAccessModel(AccessModel.open);
form.setPublishModel(PublishModel.open);
setSubscribers(form);
LeafNode node = (LeafNode) manager.createNode(nodeName, form);
A: As the documentation says, it is possible to auto-subscribe to a node based on its access model, which may be the roster, presence, or open access model.
| |
doc_23537621 | The trouble is: one of the columns I need requires that I run a sub-query within the aggregate. Which SQL does not allow.
Here is the error I am getting :
Cannot perform an aggregate function on an expression containing an aggregate or a subquery.
Here is the initial query :
select
method,
sum(payment_id) as payment_id,
sum(status) as status,
sum(allowEmailContact) as allowEmailContact,
sum(allowPhoneContact) as allowPhoneContact,
sum(totalReservations) as totalReservations
from
(SELECT
RES.method, count(*) as payment_id,
'' as status, '' as complete_data,
'' as allowEmailContact, '' as allowPhoneContact,
'' as totalReservations
FROM
Customer CUS
INNER JOIN
Reservation RES ON CUS.id = RES.customerId
WHERE
(RES.created > '2015-05-31 23:59' and RES.created <= '2015-06-15
23:59')
AND RES.payment_id IS NOT NULL
AND scope_id = 1
GROUP BY
RES.method
UNION ALL
etc
etc
) AS results
GROUP BY
method
(I used : "etc, etc, etc" to replace a large part of the query; I assume there is no need to write the entire code, as it is very long. But, the gist is clear)
This query worked just fine.
However, I need an extra field -- a field for those customers whose data are "clean" --- meaning : trimmed, purged of garbage characters (like : */?"#%), etc.
I have a query that does that. But, the problem is: how to insert this query into my already existing query, so I can create that extra column?
This is the query I am using to "clean" customer data :
select *
from dbo.Customer
where
Len(LTRIM(RTRIM(streetAddress))) > 5 and
Len(LTRIM(RTRIM(streetAddress))) <> '' and
(Len(LTRIM(RTRIM(streetAddress))) is not null and
Len(LTRIM(RTRIM(postalCode))) = 5 and postalCode <> '00000' and
postalCode <> '' and Len(LTRIM(RTRIM(postalCode))) is not null and
Len(LTRIM(RTRIM(postalOffice))) > 2 and
phone <> '' and Len(LTRIM(RTRIM(email))) > 5 and
Len(LTRIM(RTRIM(email))) like '@' and
Len(LTRIM(RTRIM(firstName))) > 2 and Len(LTRIM(RTRIM(lastName))) > 2) and
Len(LTRIM(RTRIM(firstName))) <> '-' and Len(LTRIM(RTRIM(lastName))) <> '-' and
Len(LTRIM(RTRIM(firstName))) is not null and
Len(LTRIM(RTRIM(lastName))) is not null
etc, etc
This query works fine on its own.
But, how to INSERT it into the initial query, to create a separate field, where I can get the TOTAL of those customers who meet this "clean" criteria?
I tried it like this :
select
method,
sum(payment_id) as payment_id,
sum(status) as status,
SUM((select *
from dbo.Customer
where
Len(LTRIM(RTRIM(streetAddress))) > 5 and
Len(LTRIM(RTRIM(streetAddress))) <> '' and
(Len(LTRIM(RTRIM(streetAddress))) is not null and
Len(LTRIM(RTRIM(postalCode))) = 5 and
postalCode <> '00000' and postalCode <> '' and
Len(LTRIM(RTRIM(postalCode))) is not null and
Len(LTRIM(RTRIM(postalOffice))) > 2 and phone <> '' and
Len(LTRIM(RTRIM(email))) > 5 and
Len(LTRIM(RTRIM(email))) like '@' and
Len(LTRIM(RTRIM(firstName))) > 2 and
Len(LTRIM(RTRIM(lastName))) > 2) and
Len(LTRIM(RTRIM(firstName))) <> '-' and
Len(LTRIM(RTRIM(lastName))) <> '-' and
Len(LTRIM(RTRIM(firstName))) is not null and
Len(LTRIM(RTRIM(lastName))) is not null) ) as clean_data,
sum(allowEmailContact) as allowEmailContact, sum(allowPhoneContact) as allowPhoneContact,
sum(totalReservations) as totalReservations
from
(SELECT
RES.method, count(*) as payment_id, '' as status,
'' as complete_data, '' as allowEmailContact,
'' as allowPhoneContact, '' as totalReservations
FROM Customer CUS
INNER JOIN Reservation RES ON CUS.id = RES.customerId
WHERE (RES.created > '2015-05-31 23:59' and RES.created <= '2015-06-15
23:59')
AND RES.payment_id is not null and scope_id = 1
GROUP BY RES.method
UNION ALL
etc
etc
etc
and it gave me that "aggregate" error.
A: Use COUNT(*) instead of SUM(). Also, the WHERE clause used to clean the data is unwieldy; there has to be a better way. Maybe mark the rows as clean when they are updated, or in a batch job?
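One way to avoid the subquery inside the aggregate entirely is conditional aggregation: express the "clean" test as a CASE inside the same SELECT, so SUM never wraps a subquery. A simplified sketch run against an in-memory SQLite database (the full list of cleanliness conditions from the question is abbreviated to two columns here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (id INTEGER PRIMARY KEY, email TEXT, firstName TEXT)")
conn.executemany(
    "INSERT INTO Customer (email, firstName) VALUES (?, ?)",
    [("ann@example.com", "Annika"), ("", "-"), ("bob@example.com", "Robert")],
)

# COUNT(*) gives the total; SUM(CASE ...) counts only the rows that pass
# the cleanliness test, with no subquery inside the aggregate.
total, clean = conn.execute("""
    SELECT COUNT(*),
           SUM(CASE WHEN LENGTH(TRIM(email)) > 5
                     AND email LIKE '%@%'
                     AND LENGTH(TRIM(firstName)) > 2
                    THEN 1 ELSE 0 END)
    FROM Customer
""").fetchone()
print(total, clean)  # 3 2
```

The same SUM(CASE WHEN ... THEN 1 ELSE 0 END) column can be dropped into the outer query of the original UNION without triggering the aggregate-on-subquery error.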
| |
doc_23537622 | I tried various images for the fabric peer, for example:
* hyperledger/fabric-peer:x86_64-1.0.1
* hyperledger/fabric-peer
* hyperledger/fabric-peer:x86_64-1.0.0-rc1
* etc.
Here is my docker-compose.yml file for the fabric image:
vp0:
  image: "hyperledger/fabric-peer:x86_64-1.0.1"
I get the following error when running docker-compose up:
DEBU 1a9 Module 'grpc' logger enabled for log level 'ERROR'
How can I fix this problem?
A: That looks more like a log statement than a real error. If you run docker ps, are your containers running?
| |
doc_23537623 | Thanks in advance.
Here is some simple code I wrote so you can maybe understand the problem a little better.
public void getPosts(final postCallback callback) {
final FirebaseFirestore db = FirebaseFirestore.getInstance();
CollectionReference postsRef = db.collection("Posts");
Query postsQuery = postsRef.orderBy("createTime", Query.Direction.DESCENDING).limit(20);
// Starting the post documents
Task<QuerySnapshot> task = postsQuery.get();
task.addOnCompleteListener(new OnCompleteListener<QuerySnapshot>() {
@Override
public void onComplete(@NonNull Task<QuerySnapshot> task) {
if(task.isSuccessful()){
QuerySnapshot querySnapshot = task.getResult();
List<DocumentSnapshot> docsList = querySnapshot.getDocuments();
for(DocumentSnapshot docSnap : docsList){
String userID = docSnap.getString("originalPoster");
// getting user documents
Task<DocumentSnapshot> userTask = db.collection("Users").document(userID).get();
userTask.addOnCompleteListener(new OnCompleteListener<DocumentSnapshot>() {
@Override
public void onComplete(@NonNull Task<DocumentSnapshot> task) {
DocumentSnapshot userDoc = task.getResult();
String userID = userDoc.getId();
String firstName = userDoc.getString("first_name");
String surname = userDoc.getString("surname");
User userObject = new User(firstName, userID, surname);
// cant call my callback right here otherwise its called for every
// completed user fetch
}
});
// cant call my callback right here since its too early
}
}else if(task.isCanceled()){
System.out.println("Fetch failed!");
}
}
});
}
| |
doc_23537624 | When trying to install cx-Oracle
pip install cx-Oracle
Is there any way to make it work with python 3.11?
Full error:
(venv) PS C:\Users\XXXXX\Pruebas\sp-back-office-toolscoe> pip install cx-Oracle
Collecting cx-Oracle
Using cached cx_Oracle-8.3.0.tar.gz (363 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: cx-Oracle
Building wheel for cx-Oracle (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for cx-Oracle (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
C:\Users\XXXXX\AppData\Local\Temp\pip-build-env-__332i7h\overlay\Lib\site-packages\setuptools\config\expand.py:144: UserWarning: File 'C:\\Users
\\XXXXXX\\AppData\\Local\\Temp\\pip-install-akmlg3ac\\cx-oracle_559c2c2b67a543f586a98b0333592264\\README.md' cannot be found
warnings.warn(f"File {path!r} cannot be found")
running bdist_wheel
running build
running build_ext
building 'cx_Oracle' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-bu
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for cx-Oracle
Failed to build cx-Oracle
ERROR: Could not build wheels for cx-Oracle, which is required to install pyproject.toml-based projects
As the error suggests, I have installed Microsoft C++ Build Tools and added it to my path, but I still get the same error. Not sure what I should do.
| |
doc_23537625 | What I've reached so far:
DELETE FROM `sm_m2epro_listing_product_BACK` WHERE product_id IN (SELECT * FROM (SELECT a.product_id
FROM sm_m2epro_listing_product_BACK a
WHERE listing_id =8
GROUP BY a.product_id
HAVING COUNT( a.product_id ) > 1) qq);
But the query returns "NO rows affected" even if the duplicated rows are present.
A: This works for me, check this fiddle. After getting the repeated rows, you need to keep one of them; I used the column 'name' (you could use the primary key if you have one) to discriminate between rows. You should use a column that is not repeated across two rows with the same product_id value:
DELETE FROM `sm_m2epro_listing_product_BACK`
WHERE product_id IN (SELECT product_id FROM (SELECT a.product_id
FROM sm_m2epro_listing_product_BACK a
WHERE listing_id =8
GROUP BY a.product_id
HAVING COUNT( a.product_id ) > 1) qq)
and name not in (select min(name) from (SELECT a.name
FROM sm_m2epro_listing_product_BACK a
WHERE listing_id =8
GROUP BY a.product_id
HAVING COUNT( a.product_id ) > 1) qq);
A: DELETE
FROM `sm_m2epro_listing_product_BACK`
WHERE listing_id = 8
and
id not in (select id
from (select min(id) as id
from `sm_m2epro_listing_product_BACK`
where listing_id = 8 group by product_id) t)
The command above finds the minimum id of each group and removes records not having the id in the return set. The command assumes you have id as primary key.
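The keep-the-minimum-id pattern can be sanity-checked with an in-memory SQLite database (table and column names simplified from the question; note that MySQL additionally needs the derived-table wrapper shown above, because it cannot select from the table being deleted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listing_product (id INTEGER PRIMARY KEY, listing_id INT, product_id INT)")
conn.executemany(
    "INSERT INTO listing_product (listing_id, product_id) VALUES (?, ?)",
    [(8, 100), (8, 100), (8, 101), (8, 101), (8, 101), (9, 100)],
)

# Keep the row with the smallest id per product_id, but only within listing 8.
conn.execute("""
    DELETE FROM listing_product
    WHERE listing_id = 8
      AND id NOT IN (SELECT id FROM (SELECT MIN(id) AS id
                                     FROM listing_product
                                     WHERE listing_id = 8
                                     GROUP BY product_id) t)
""")

rows = conn.execute("""
    SELECT listing_id, product_id, COUNT(*)
    FROM listing_product
    GROUP BY listing_id, product_id
    ORDER BY listing_id, product_id
""").fetchall()
print(rows)  # [(8, 100, 1), (8, 101, 1), (9, 100, 1)]
```

Each product_id in listing 8 is left with exactly one row, and other listings are untouched.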
| |
doc_23537626 | I would appreciate help of any sort, including ideas besides using JavaScript.
A: Google has a number of services available to people who program using their Maps. Go to https://developers.google.com/maps/documentation/javascript/reference and check out the directions services. I'm not very familiar with them, but I'm guessing it's similar to their other services. You make a directions request object supplied with point a and point b, and it will send a message to Google asking for the appropriate directions, which will be returned in some sort of result object that you can use to show the way. Update point b each time the user clicks and resend the google request, and it should update the path. Check out the API and it shouldn't be too hard to get it working. As for alternatives to javascript, Google Maps is written all in javascript, so there really is no other way. But I know from experience, most of their supplied code works really well, so I bet you can get it working!
A: Following link will show the route between two points:
J2ME/Android/BlackBerry - driving directions, route between two locations
| |
doc_23537627 | error: The sandbox is not in sync with the Podfile.lock. Run 'pod install' or update your CocoaPods installation.
** BUILD FAILED **
The following build commands failed:
PhaseScriptExecution [CP]\ Check\ Pods\ Manifest.lock /var/root/Library/Developer/Xcode/DerivedData/myproject-dlxqvfatclxsixazxpmwbqvzqpta/Build/Intermediates.noindex/myproject.build/Release-iphonesimulator/myproject.build/Script-95C8104D52AAF5838850DC1B.sh
(1 failure)
xcodebuild: Command failed with exit code 65
My project does not use plugins that require CocoaPods. Anyway, my CocoaPods is up to date.
pod --version
1.9.3
my Podfile (inside /platforms/ios)
# DO NOT MODIFY -- auto-generated by Apache Cordova
platform :ios, '10.0'
target 'myproject' do
project 'Obra Social Camioneros Santa Fé.xcodeproj'
end
I tried removing the Pods folder and then running 'pod install' in Terminal, but when building I still get that error.
What is wrong?
A: Fixed by removing and then re-adding the iOS platform.
Thanks for your answers.
| |
doc_23537628 | Using Apache in Linode server.
| |
doc_23537629 | I managed to find this blog post which has outlined the process quite nicely:
Achieving POCOs in LINQ To SQL
I have managed to get the retrieval of records to objects working properly. However, due to the nested nature of my model, I can't seem to get addition working for the child objects. That is, if I create a child object and set its reference to the desired parent object, LINQ to SQL still throws an exception stating that the child's reference to the parent is null. Adding a plain old parent object succeeds, but adding child objects directly fails.
Here is my failing test:
[Test]
public void AddSelectionShouldAddSelectionToMarket()
{
Market market = (Market) new Repository().GetMarket(1);
Selection selection = new Selection();
selection.Market = market;
new Repository().AddSelection(selection);
Assert.IsTrue(selection.SID > 0);
}
Here is the error message:
System.InvalidOperationException: An attempt was made to remove a relationship between a Market and a Selection. However, one of the relationship's foreign keys (Selection.MID) cannot be set to null.
The relevant parts of the 2 objects:
[DataContract]
public class Selection : ISelection
{
private int mID;
[DataMember]
public int MID
{
get { return this.mID; }
set { this.mID = value; }
}
private Market market;
[DataMember]
public Market Market
{
get { return this.market; }
set
{
this.market = value;
this.mID = value.MID;
}
}
}
[DataContract]
public class Market : IMarket
{
private int mID;
[DataMember]
public int MID
{
get { return this.mID; }
protected set { this.mID = value; }
}
private List<Selection> selections;
[DataMember]
public List<Selection> Selections
{
get { return this.selections; }
set
{
this.selections = value;
// For LINQ
foreach (Selection selection in selections)
{
selection.MID = mID;
selection.Market = this;
}
}
}
}
My DA code:
MarketsDataContext context = new MarketsDataContext();
DataLoadOptions options = new DataLoadOptions();
options.LoadWith<Selection>(s => s.Prices);
options.LoadWith<Market>(m => m.Selections);
context.LoadOptions = options;
return context;
and;
public void AddSelection(ISelection selection)
{
using (MarketsDataContext context = MarketsDataContext.GetContext())
{
context.Selections.InsertOnSubmit((Selection) selection);
context.SubmitChanges();
}
}
And finally my XML mapping:
<Table Name="dbo.Markets" Member="Markets">
<Type Name="Market">
<Column Name="MID" Member="MID" Storage="mID" DbType="Int NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" />
<Association Name="FK_Market-Selections" Member="Selections" Storage="selections" ThisKey="MID" OtherKey="MID" DeleteRule="NO ACTION" />
</Type>
</Table>
<Table Name="dbo.Selections" Member="Selections">
<Type Name="Selection">
<Column Name="SID" Member="SID" Storage="sID" DbType="Int NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" />
<Column Name="MID" Member="MID" Storage="mID" DbType="Int NOT NULL" />
<Association Name="FK_Market-Selections" Member="Market" Storage="market" ThisKey="MID" OtherKey="MID" IsForeignKey="true" />
</Type>
</Table>
So, can anyone point me in the right direction? I've been searching for hours...
Edit:
Here's my stacktrace for my test failure:
at System.Data.Linq.ChangeTracker.StandardChangeTracker.StandardTrackedObject.SynchDependentData()
at System.Data.Linq.ChangeProcessor.ValidateAll(IEnumerable`1 list)
at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges()
at BetMax.DataModel.Repository.AddSelection(ISelection selection) in Repository.cs: line 68
at BetMax.DataModel.Test.ModelTest.AddSelectionShouldAddSelectionToMarket() in ModelTest.cs: line 65
And my GetMarket method:
public IMarket GetMarket(int MID)
{
Market market;
using (MarketsDataContext context = MarketsDataContext.GetContext())
{
market = context.Markets.Single(m => m.MID == MID);
}
return market;
}
Edit 2:
Well, adding
DeleteOnNull="true"
to Selections foreign key in the XML mapping has removed the foreign key error, but now I'm getting a null reference on one of Selections's child objects, saying its reference to Selection is null even though Selection is being initialised with none of its variables set (outside the foreign keys). I even tried creating a child object, and set its references correctly but am still getting this error:
System.NullReferenceException: Object reference not set to an instance of an object.
at BetMax.DTO.Price.set_Selection(Selection value) in Price.cs: line 25
at System.Data.Linq.Mapping.PropertyAccessor.Accessor`3.SetValue(ref T instance, V value)
at System.Data.Linq.Mapping.MetaAccessor`2.SetBoxedValue(ref Object instance, Object value)
at System.Data.Linq.ChangeProcessor.ClearForeignKeysHelper(MetaAssociation assoc, Object trackedInstance)
at System.Data.Linq.ChangeProcessor.ClearForeignKeyReferences(TrackedObject to)
at System.Data.Linq.ChangeProcessor.PostProcessUpdates(List`1 insertedItems, List`1 deletedItems)
at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges()
at BetMax.DataModel.Repository.AddSelection(ISelection selection) in Repository.cs: line 68
at BetMax.DataModel.Test.ModelTest.AddSelectionShouldAddSelectionToMarket() in ModelTest.cs: line 69
Price is another object, constructed in the same that that Selection is related to Market (1 selection has many prices, 1 market has many selections) etc etc.
A: I guess the problem is in your test method. You created a Repository with a DataContext but you did your submits with another one.
[Test]
public void AddSelectionShouldAddSelectionToMarket()
{
Market market = (Market) new Repository().GetMarket(1);
Selection selection = new Selection();
selection.Market = market;
new Repository().AddSelection(selection);
Assert.IsTrue(selection.SID > 0);
}
Create a Repository and use it in the test method.
[Test]
public void AddSelectionShouldAddSelectionToMarket()
{
Repository repository = new Repository();
Market market = (Market) repository.GetMarket(1);
Selection selection = new Selection();
selection.Market = market;
repository.AddSelection(selection);
Assert.IsTrue(selection.SID > 0);
}
A: Just a guess but it might be here
public Market Market
{
get { return this.market; }
set
{
this.market = value;
this.mID = value.MID;
}
}
What happens when the value you set to Market is null? The last line of that will be invalid since it wont be able to resolve null.MID. Maybe you need this for your setter:
set
{
this.market = value;
this.mID = (value == null) ? null : value.MID;
}
Also your MID would have to be nullable
int? MID
A: For your new issue: the problem occurs on a null assignment to the Selection property of Price. Did your code do that? Could you share the code where you got the exception again? I mean the assignment to the Price entity...
Edit according to comment:
I guess it is the null-check issue we mentioned before in GeekyMonkey's post. During initialization of the Selection class, the Prices property is set to null, but the setter throws a null reference when null is assigned. So you have to add a null check in the setter of the Prices property:
private List<Price> prices;
[DataMember]
public List<Price> Prices
{
get { return this.prices; }
set
{
if(value != null)
{
this.prices = value;
// For LINQ
foreach (Price price in prices)
{
price.MID = mID;
price.Selection = this;
}
}
}
}
A: I know it's been a while and you've probably already resolved the issue, but maybe not...
I'm assuming that your data structure is similar to this:
Market
======
Market_ID int not null identity (1, 1)
Selection
=========
Selection_ID int not null identity (1, 1)
Market_ID int (FK to Market)
Selection_Name varchar(50)
To add a new Market and a new Selection simultaneously:
Selection selection = new Selection();
Market market = new Market();
market.Selections.Add(selection);
DataContext.Markets.InsertOnSubmit(market);
DataContext.SubmitChanges();
To add a new Selection to an existing Market:
Selection selection = new Selection();
Market market = DataContext.Markets.Where(a => a.Market_ID == 7).Single();
market.Selections.Add(selection);
DataContext.SubmitChanges();
To update the first Selection in a Market:
Selection selection = DataContext.Markets.Where(a => a.Market_ID == 7).Selections.First();
selection.Selection_Name = "New Name";
DataContext.SubmitChanges();
A: I'd suggest sending your code to Sidar Ok. He's a nice guy and will point you in the right direction. Or at least post a comment on his blog pointing him to your question here.
| |
doc_23537630 | import threading
import inspect
class doStuff():
def __init__(self, somePropertyFromAnotherClass):
self.lock = threading.Lock()
self.prop = somePropertyFromAnotherClass
def doCoolThreadingStuff(self):
print("do threading stuff with {}".format(self.prop))
def someDecorator(cls):
def wrapper(cls):
print(inspect.getargspec(cls.__init__))
#ds = doStuff() ## this is the bit that i can't figure out!
wrapper(cls)
return cls
class A():
def __init__(self):
self.obj = "i'm an object"
@someDecorator
class B():
def __init__(self, obj):
self.obj = obj
def doSomethingWithObj(self):
print('doing something with obj')
if __name__ == "__main__":
a = A()
b = B(a)
A: The decorator was ill-formed:
def someDecorator(cls_obj):
def wrapper(*args):
ds = doStuff(args[0]) # positional : /
ds.doCoolThreadingStuff()
return cls_obj(*args)
return wrapper
With the decorator defined as above, I am able to access the instance of class A that is passed to class B.
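Putting the pieces together, a minimal runnable sketch (class and method names follow the question; doStuff's threading method is reduced to a print for brevity):

```python
import threading

class doStuff:
    def __init__(self, somePropertyFromAnotherClass):
        self.lock = threading.Lock()
        self.prop = somePropertyFromAnotherClass

    def doCoolThreadingStuff(self):
        print("do threading stuff with {}".format(self.prop))

def someDecorator(cls_obj):
    def wrapper(*args):
        # args[0] is the positional argument B.__init__ receives (the A instance)
        ds = doStuff(args[0])
        ds.doCoolThreadingStuff()
        return cls_obj(*args)
    return wrapper

class A:
    def __init__(self):
        self.obj = "i'm an object"

@someDecorator
class B:
    def __init__(self, obj):
        self.obj = obj

a = A()
b = B(a)           # runs doCoolThreadingStuff, then constructs B
print(b.obj is a)  # True
```

Note that the decorator replaces the name B with the wrapper function; calling B(a) still returns a real B instance via cls_obj(*args).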
| |
doc_23537631 | E: Unable to locate package python-pip
The command '/bin/sh -c apt-get install -y python-pip python-dev build-essential' returned a non-zero code: 100
My Dockerfile is:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ['python']
CMD ['app.py']
I've tried to use these commands before installing python-pip, but it didn't help:
RUN apt-get install -y software-properties-common
RUN add-apt-repository universe
A: You have to use the python3-pip package. Your Dockerfile can look like this:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y python3-pip python3-dev build-essential
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python3"]
CMD ["app.py"]
A better option is to use the Python image directly:
FROM python:3
RUN apt-get update -y && apt-get install -y build-essential
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
| |
doc_23537632 | <li ng-repeat="task in tasks | taskPriority:this | filter:taskname as ukupno track by $index">
<input class="edit_task" ng-blur="editTask(task.id, task.task)" ng-model="task.task" value="{{ task.task }}" type="text">
</li>
The filter checks whether a priority level (1, 2, 3) in the fSPriorites array matches the task's priority; if yes, it returns those tasks.
angular.module('TaskFilter', []).filter('taskPriority', function() {
return function(task, scope) {
var filtered = [];
angular.forEach(task, function(task) {
if($.inArray(task.priority, scope.fSPriorites) != -1)
filtered.push(task);
});
return filtered;
};
});
How do I debug this?
A: I don't understand why you are passing the scope to the filter. This is not a good practice. Try the code below.
HTML
<li ng-repeat="task in tasks | taskPriority:this.fSPriorites | filter:taskname as ukupno track by $index">
<input class="edit_task" ng-blur="editTask(task.id, task.task)" ng-model="task.task" value="{{ task.task }}" type="text">
</li>
Filter
angular.module('TaskFilter', []).filter('taskPriority', function() {
return function(task, fSPriorites) {
var filtered = [];
angular.forEach(task, function(task) {
if($.inArray(task.priority, fSPriorites) != -1)
filtered.push(task);
});
return filtered;
};
});
Hope this helps. Thanks.
| |
doc_23537633 | Is there any way to increase the default pod limit to 250 so that I can run all the versions on a single instance?
A: You can set the MaxPods field in the kubelet config file:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250
You can then supply the config file to the kubelet binary with the --config flag, for example:
kubelet --config my-kubelet-config.yaml
Alternatively, the kubelet binary also has a --max-pods flag that allows you to set the value directly. However, this flag is deprecated and the use of a config file, as shown above, is recommended. See the kubelet reference.
| |
doc_23537634 | The logs start to show the error in the screenshot below.
the components are:
import React, { Component } from "react";
import axios from "axios";
import Comments from "../components/comments";
class Article extends Component {
constructor(props) {
super(props);
this.state = {
title: "",
error: "",
comment: ""
};
}
componentDidMount() {
this.getComments();
}
getComments = () => {
const {
match: { params }
} = this.props;
return axios
.get(`/articles/${params.id}/comments`, {
headers: {
Accept: "application/json",
"Content-Type": "application/json",
}
})
.then(response => {
return response.json();
})
.then(response => this.setState({ comments: response.comments }))
.catch(error =>
this.setState({
error
})
);
};
render() {
return (
<div>
{this.state.title}
<div>
<h2>Comments</h2>
<Comments
getComments={this.getComments}
/>
</div>
</div>
);
}
}
export default Article;
and Comments component
import React, { Component } from "react";
import PropTypes from "prop-types";
import Comment from "./comment";
import axios from "axios";
import Article from "../screens/article";
class Comments extends Component {
constructor(props) {
super(props);
this.state = {
comments: [],
comment: "",
error: ""
};
this.load = this.load.bind(this);
this.comment = this.comment.bind(this);
}
componentDidMount() {
this.load();
}
load() {
return this.props.getComments().then(comments => {
this.setState({ comments });
return comments;
});
}
comment() {
return this.props.submitComment().then(comment => {
this.setState({ comment }).then(this.load);
});
}
render() {
const { comments } = this.state;
return (
<div>
{comments.map(comment => (
<Comment key={comment.id} commment={comment} />
))}
</div>
);
}
}
export default Comments;
So, I've tried to pass it by props and set the state on the Comments component,
and instead of using just comments.map I've tried to use this.state, but it shows the same error in the logs.
So, would someone please clarify this kind of issue?
It seems like a pretty common issue when working with React.
A: If an error occurs you do:
.catch(error => this.setState({ error }) );
which makes the chained promise resolve to undefined and that is used as comments in the Comments state. So you have to return an array from the catch:
.catch(error => {
this.setState({ error });
return [];
});
Additionally it would make sense to not render the Comments child at all if the parent's state contains an error.
A: The other way is to check whether it's an array and, if so, check its length before calling .map. You have initialized comments to an empty array, so the array check is normally redundant; but to be on the safer side, if the API response returns an object, that object will be set as comments and comments.length won't behave as expected, so it's good to check whether it's an array or not.
Below change would work
<div>
{Array.isArray(comments) && comments.length>0 && comments.map(comment => (
<Comment key={comment.id} commment={comment} />
))}
</div>
A: The first time the comments component renders there was no response yet so comments were undefined.
import React, { Component } from "react";
import PropTypes from "prop-types";
import Comment from "./comment";
import axios from "axios";
import Article from "../screens/article";
class Comments extends Component {
constructor(props) {
super(props);
this.state = {
comments: [],
comment: "",
error: ""
};
this.load = this.load.bind(this);
this.comment = this.comment.bind(this);
}
componentDidMount() {
this.load();
}
load() {
return this.props.getComments().then(comments => {
this.setState({ comments });
return comments;
});
}
comment() {
return this.props.submitComment().then(comment => {
this.setState({ comment }).then(this.load);
});
}
render() {
const { comments } = this.state;
if (!comments) return <p>No comments Available</p>;
return (
<div>
{comments.map(comment => (
<Comment key={comment.id} commment={comment} />
))}
</div>
);
}
}
export default Comments;
| |
doc_23537635 | I was wondering if there's any specific way to "dump" the input given to alt-ergo (assuming alt-ergo is invoked from frama-c; i.e. not interop)?
I'd like to see how proof obligations of C programs' properties are encoded in alt-ergo's "native" input language. Any assistance would be much appreciated.
A: The option -wp-out <dir> allows you to select <dir> as the directory where generated files will be put. These files are sorted in subdirectories according to the memory model in use (typed by default). For Alt-Ergo, you should find files ending in .ergo containing only the proof obligation, and files ending in _Alt-Ergo.mlw containing the full context of the proof obligation (including axioms defining the arithmetic and memory models).
Note however that the upcoming Frama-C 20.0 Calcium is introducing the use of Why3's API for communicating with the provers, and that as a result the native Alt-Ergo (and Coq) outputs are slowly being deprecated.
| |
doc_23537636 | A top panel and a bottom panel wrap a ScrollView that gets filled with data after the screen activates.
The top panel works fine, but when I try opening the bottom panel I get a divide-by-zero exception because mContentHeight is 0 (inside the panel code).
Any suggestions?
<LinearLayout
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
<org.miscwidgets.widget.Panel
android:id="@+id/topPanel"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
panel:animationDuration="1000"
panel:content="@+id/top_content"
panel:handle="@+id/panelHandle"
panel:linearFlying="true"
android:paddingBottom="4dip"
panel:position="top" >
<Button
android:id="@+id/panelHandle"
android:layout_width="fill_parent"
android:layout_height="0.1dip" />
<LinearLayout
android:id="@+id/top_content"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:orientation="horizontal" >
<ImageView
android:id="@+id/imageView2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:src="@drawable/logo" />
</LinearLayout>
</org.miscwidgets.widget.Panel>
<ScrollView
android:id="@+id/deal_box_streaps_scroller"
android:layout_width="fill_parent"
android:layout_height="fill_parent" >
<LinearLayout
android:id="@+id/deal_box_streaps_container"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:orientation="vertical" >
</LinearLayout>
</ScrollView>
<org.miscwidgets.widget.Panel
android:id="@+id/buttomPanel"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
panel:animationDuration="1200"
panel:content="@+id/buttom_content"
panel:handle="@+id/panelHandle2"
android:paddingTop="4dip"
panel:position="bottom" >
<Button
android:id="@+id/panelHandle2"
android:layout_width="fill_parent"
android:layout_height="120dip" />
<LinearLayout
android:id="@+id/buttom_content"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:orientation="horizontal" >
<ImageView
android:id="@+id/imageView21"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:src="@drawable/logo" />
</LinearLayout>
</org.miscwidgets.widget.Panel>
</LinearLayout>
| |
doc_23537637 | How can I prevent this and make the user re-enter the login credentials?
Also from this, I can say that the client browser stores the URL along with the query parameters.
How can I keep the browser from saving/caching my login credentials?
Thanks in advance! :)
<div class="container">
<form action="Login" method="get">
<h1>Login</h1>
<label>Enter your username </label>
<input type="text" placeholder="username" name="username"/><br /><br />
<label>Enter your password </label>
<input type="text" placeholder="password" name="password"/><br /><br />
<button type="submit" value="Login">Submit</button>
</form>
</div>
QueryString with Login Credentials : http://localhost:8080/DemoApp/Login?username=karthik&password=karthik123
A: Never send any credentials via URL parameters of a GET request for multiple reasons:
*
*Caching of URLs is always allowed in the browser. If you leave your browser unattended for a moment it may leak your credentials.
*All infrastructure elements (firewalls, proxies) along the way are always allowed to log URLs for debug purposes. Credentials may leak because someone turned logging on.
Secrets are allowed to be passed via headers or body of requests. Please use POST request to send the credentials. With a bit of luck this should solve your problem.
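To illustrate why this matters, here is a minimal Python sketch (the endpoint and credentials are placeholders taken from the question) showing that GET parameters become part of the URL itself, while POST data travels in the request body:

```python
from urllib.parse import urlencode
from urllib.request import Request

creds = {"username": "karthik", "password": "secret"}

# GET: the credentials become part of the URL, visible in history, proxy logs, etc.
get_url = "http://localhost:8080/DemoApp/Login?" + urlencode(creds)

# POST: the URL stays clean; the credentials travel in the request body instead
post_req = Request("http://localhost:8080/DemoApp/Login",
                   data=urlencode(creds).encode("utf-8"), method="POST")

print(get_url)             # ...?username=karthik&password=secret
print(post_req.full_url)   # no query string
```

The same distinction applies to the HTML form above: changing method="get" to method="post" keeps the credentials out of the URL.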
| |
doc_23537638 | Here is my code:
public static void Initialize()
{
if (!isInitialized)
{
isInitialized = true;
Thread t = new Thread( new ThreadStart(SetProperties));
t.Start();
}
}
public static void SetProperties()
{
//The next line is where the NullReferenceException is pointing to
OleDbConnection conn = new OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source='" + System.Web.HttpContext.Current.Server.MapPath("cms.accdb") + "';");
using (conn)
{ ....
Any help, please? Thanks
A: In your new thread, you have no access to the System.Web.HttpContext.Current object.
You can replace
System.Web.HttpContext.Current.Server.MapPath("cms.accdb")
With
HostingEnvironment.MapPath("cms.accdb")
More info about it here:
HostingEnvironment.MapPath
| |
doc_23537639 | Now I am getting the exception
java.io.InvalidClassException: com.navtech.kernel.flat.FlatValidationException; local class incompatible: stream classdesc serialVersionUID = -6871353730928221293, local class serialVersionUID = -5086279873877116405L.
What's wrong here?
A: This is to be expected. The serialVersionUID is used to differentiate between different versions of the same Serializable class. Most likely there is a good reason for the change in the version, and it really cannot be deserialized as the new version.
A: When an object is serialized, the serialVersionUID is serialized along with the other contents.
Later when that is deserialized, the serialVersionUID from the deserialized object is extracted and compared with the serialVersionUID of the loaded class.
The numbers are not matching so this exception.
To fix this issue: serialize the class with the new serialVersionUID before deserialization.
A: Incompatible Changes that are specified in the Doc:
Incompatible changes to classes are those changes for which the guarantee of interoperability cannot be maintained. The incompatible changes that may occur while evolving a class are:
*
*Deleting fields - If a field is deleted in a class, the stream written will not contain its value. When the stream is read by an earlier class, the value of the field will be set to the default value because no value is available in the stream. However, this default value may adversely impair the ability of the earlier version to fulfill its contract.
*Moving classes up or down the hierarchy - This cannot be allowed since the data in the stream appears in the wrong sequence.
*Changing a nonstatic field to static or a nontransient field to
transient
When relying on default serialization, this change is equivalent to deleting a field from the class. This version of the class will not write that data to the stream, so it will not be available to be read by earlier versions of the class. As when deleting a field, the field of the earlier version will be initialized to the default value, which can cause the class to fail in unexpected ways.
*Changing the declared type of a primitive field
Each version of the class writes the data with its declared type. Earlier versions of the class attempting to read the field will fail because the type of the data in the stream does not match the type of the field.
*Changing the writeObject or readObject
method so that it no longer writes or reads the default field data or changing it so that it attempts to write it or read it when the previous version did not. The default field data must consistently either appear or not appear in the stream.
*Changing a class from Serializable to Externalizable or vice versa
is an incompatible change since the stream will contain data that is incompatible with the implementation of the available class.
*Changing a class from a non-enum type to an enum type or vice versa
since the stream will contain data that is incompatible with the implementation of the available class.
*Removing either Serializable or Externalizable is an incompatible
change
since when written it will no longer supply the fields needed by older versions of the class.
*Adding the writeReplace or readResolve method
to a class is incompatible if the behavior would produce an object that is incompatible with any older version of the class.
In your case there is a chance the issue comes from moving the class in the hierarchy, because when there is an inheritance change, the class's own serialVersionUID will no longer match the one already written in the stream.
It is strongly recommended that all serializable classes explicitly declare serialVersionUID values, since the default serialVersionUID computation is highly sensitive to class details that may vary depending on compiler implementations, and can thus result in unexpected InvalidClassExceptions during deserialization.
hope this helps!
| |
doc_23537640 | There are two cases.
Case 1: the input JSON has a "_source" field and the output is null.
Case 2: the input JSON does not have a "_source" field and the output has the searched values for all the fields in ES.
Case 1 has "_source": [" coreid ", " program_id " ],
{
"_source": [" coreid ", " program_id " ],
"query": {
"bool": {
"should": [
{
"bool":
{
"must": [
{"match": {"tu_tm": { "query": "tu" } } },
{"match": {"program_id": {"query": "86328" } } }
]
}
},
{
"bool":
{
"must": [
{"match": {"tu_tm": {"query": "tu" } } },
{"match": {"program_id": {"query": "86330" } } }
]
}
},
{
"bool": {
"must": [
{
"match": {
"tu_tm": {
"query": "tu"
}
}
}
]
}
}
]
}
}
}
The output has "_source": {}
{
"took": 7,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 3,
"max_score": 1.000008,
"hits": [
{
"_index": "matching_tool",
"_type": "data",
"_id": "THcc2msB1g08C8plFbE0",
"_score": 1.000008,
"_source": {}
}
A: Change your _source definition from
"_source": [" coreid ", " program_id " ],
to
"_source": ["coreid", "program_id" ],
Surrounding white spaces for field ids are unnecessary.
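If the field list is built programmatically, one way to guard against this (a generic Python sketch, not tied to any particular Elasticsearch client) is to strip the names before building the request body:

```python
raw_fields = [" coreid ", " program_id "]

# Strip surrounding whitespace so the names match the real field names in the index
source_fields = [name.strip() for name in raw_fields]

query_body = {"_source": source_fields, "query": {"match_all": {}}}
print(query_body["_source"])  # ['coreid', 'program_id']
```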
Hope that helps.
| |
doc_23537641 | Here it is
51.772425|0.00|21.33|0.00|5000|51.772425|0
I want to intercept it with Scrapy, but instead of getting just this little piece of string I get the whole page.
'NJGroup123390' is the ID of the select tag.
Here's my code:
def after_login(self, response):
return Request(url='https://****.com/NexJobPage.asp?Id=445',
callback=self.parse_form)
def parse_form(self, response):
return [FormRequest.from_response(response,
formdata={'NJGroup123390':'5000'},
dont_click=True,
callback=self.parse_form2)]
# here I should have the response returned by AJAX: 51.772425|0.00|21.33|0.00|5000|51.772425|0
def parse_form2(self, response):
f = open('logo2', 'wb')
f.write(response.body)
f.close()
Thanks
A: You might be missing an additional argument or header added through JavaScript. Inspect the request sent in your browser, check for missing parameters, headers or cookies, and add them to your Request object.
You can use the shell to see what is the data filled by FormRequest:
$ scrapy shell https://stackoverflow.com/users/signup
2014-02-12 19:38:12-0400 [scrapy] INFO: Scrapy 0.22.1 started (bot: scrapybot)
...
In [1]: from scrapy.http import FormRequest
In [2]: req = FormRequest.from_response(response, formnumber=1)
In [3]: import urlparse
In [4]: urlparse.parse_qs(req.body, True)
Out[4]:
{'display-name': [''],
'email': [''],
'fkey': ['324799e03d5f73e1af72134e6d943f58'],
'password': [''],
'password2': [''],
'submit-button': ['Sign Up']}
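Note that on Python 3 the urlparse module moved into urllib.parse; the same inspection of a form body (using a made-up body that includes the question's field, purely for illustration) looks like:

```python
from urllib.parse import parse_qs

# Hypothetical request body, of the kind FormRequest.from_response would produce
body = "fkey=324799e03d5f73e1af72134e6d943f58&NJGroup123390=5000&submit-button=Sign+Up"
fields = parse_qs(body, keep_blank_values=True)
print(fields["NJGroup123390"])  # ['5000']
```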
| |
doc_23537642 | My delphi wrapper is calling functions from c++ dll.
This is C++ code:
typedef enum EFTDeviceControlAction
{
EFT_DCA_CR_CARD_RETRACT = 0x01,
EFT_DCA_CR_CARD_REPOSITION = 0x02,
EFT_DCA_CR_SHUTTER_OPEN = 0x03,
EFT_DCA_CR_SHUTTER_CLOSE = 0x04,
EFT_DCA_CR_CARD_EJECT = 0x05,
} EFTDeviceControlAction;
typedef enum EFT_PrintOptions {
poPrintState = 0,
poPrintFirst = 1,
poPrintSubsequent = 2,
poPrintFinal = 3,
poPrintAbort = 9
} EFT_PrintOptions;
typedef void * EFT_HANDLE;
int EFT_CreateSession(EFT_HANDLE * h);
int EFT_DestroySession(EFT_HANDLE h);
int EFT_ReadProperty(EFT_HANDLE h, int table, int index, char * pValue, unsigned int maxLength);
int EFT_WriteProperty(EFT_HANDLE h, int table, int index, char * pValue);
...
And this is delphi code :
EFTDeviceControlAction = (
EFT_DCA_CR_CARD_RETRACT = $01,
EFT_DCA_CR_CARD_REPOSITION = $02,
EFT_DCA_CR_SHUTTER_OPEN = $03,
EFT_DCA_CR_SHUTTER_CLOSE = $04,
EFT_DCA_CR_CARD_EJECT = $05
);
EFT_PrintOptions = (
poPrintState = 0,
poPrintFirst = 1,
poPrintSubsequent = 2,
poPrintFinal = 3,
poPrintAbort = 9
);
EFT_HANDLE = pointer;
function EFT_CreateSession(var h: EFT_HANDLE): Integer; stdcall; external 'api.dll';
function EFT_DestroySession(h: EFT_HANDLE): Integer; stdcall; external 'api.dll';
function EFT_ReadProperty(h: EFT_HANDLE; table: Integer; index: Integer; pValue: PChar; maxLength: Cardinal): Integer; stdcall; external 'api.dll';
function EFT_WriteProperty(h: EFT_HANDLE; table: Integer; index: Integer; pValue: PChar): Integer; stdcall; external 'api.dll';
The problem that I have is this line (C++):
typedef void * EFT_HANDLE
How is this line defined in Delphi?
Is this a pointer or a procedure? And what value do I use for the parameter when I call the function?
For every call I get Access violation at address 0040537B in module
A: typedef void * EFT_HANDLE;
The name of the declared type is EFT_HANDLE and it is an alias for void*. And void* is simply an untyped pointer.
So, in Delphi you define it like this:
type
EFT_HANDLE = Pointer;
Which is exactly what you already did.
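For comparison only, the same "untyped pointer" idea can be expressed with Python's ctypes (no real DLL involved, purely illustrative):

```python
import ctypes

# C's "typedef void * EFT_HANDLE" is just an untyped pointer;
# in ctypes the equivalent alias would be:
EFT_HANDLE = ctypes.c_void_p

h = EFT_HANDLE()   # a null handle, to be filled in by the library
print(h.value)     # None while the handle is null
```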
The rest of your translations look basically quite reasonable. I have these comments:
*
*Are you sure the calling convention is stdcall? The C++ code you show does not specify a calling convention, and that invariably means cdecl.
*Use PAnsiChar rather than PChar so that your code is correct on Unicode Delphi as well as old non-Unicode Delphi.
The obvious place for an access violation is the null-terminated string. It would be helpful to see the code that you have which calls EFT_ReadProperty. It will need to look like this:
var
prop: AnsiString;
....
SetLength(prop, 128); // for example, not sure what value is needed here
retval := EFT_ReadProperty(handle, index, PAnsiChar(prop), Length(prop)+1);
// the +1 is for the null-terminator, but the library will specify exactly
// how that is handled and it could equally be that the +1 is omitted
| |
doc_23537643 | hello hello hello I am I am I am your string string string string of strings
Can I somehow find repetitive sub-strings delimited by spaces (EDIT)? In this case it would be 'hello', 'I am' and 'string'.
I have been wondering about this for some time but I still cannot find any real solution.
I have also read some articles concerning this topic and hit upon suffix trees, but can this help me even though I need to find every repetition, e.g. with a repetition count higher than two?
If so, is there some library for Python that can handle suffix trees and perform operations on them?
Edit: I am sorry I was not clear enough. Just to make it clear: I am looking for repetitive sub-strings, meaning sequences in the string that, in terms of regular expressions, could be substituted by + or {} quantifiers. So if I had to make a regular expression from the listed string, I would write
(hello ){3}(I am ){3}your (string ){4}of strings
A: To find two or more characters that repeat two or more times, each delimited by spaces, use:
(.{2,}?)(?:\s+\1)+
Here's a working example with your test string: http://bit.ly/17cKX62
EDIT: made quantifier in capture group reluctant by adding ? to match shortest possible match (i.e. now matches "string" and not "string string")
EDIT 2: added a required space delimiter for cleaner results
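Since the question mentions Python, here is the pattern applied to the example string (a minimal sketch; variable names are mine):

```python
import re

text = ("hello hello hello I am I am I am "
        "your string string string string of strings")

# Shortest unit of 2+ characters that repeats 2+ times, separated by whitespace
pattern = re.compile(r"(.{2,}?)(?:\s+\1)+")
repeats = [m.group(1) for m in pattern.finditer(text)]
print(repeats)  # ['hello', 'I am', 'string']
```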
| |
doc_23537644 | All is good, but only I can see the posts; no one else sees them.
How can I fix this?
Here is the data I send to the FB API to create a post as the Page:
array(7) {
["message"]=>
string(2942) "Chevrolet is preparing to release its latest supercar model on the publicthe Corvette Stingray. As part of final testing and performance ratings, Corvette has enlisted a team of technical experts to put the car through its paces on the world-famous Nürbugring race course. This team is being headed up by Chevrolet Europes technical manager, Patrick Herrman, who is overseeing the two Corvette Stingray coupes that are undergoing this ultimate stress test.
The GM dynamics engineer, Jim Mero, had this to say about why Chevrolet chose to go overseas in testing their new model."
["picture"]=>
string(73) "http://test.iwsghost.com/wp-content/uploads/2014/06/corvette-stingray.jpg"
["name"]=>
string(115) "Chevrolet to Unleash All-New Corvette Stingray on Nürburgring | Nürburgring Lap Times [ nurburgringlaptimes.com ]"
["link"]=>
string(92) "http://nurburgringlaptimes.com/chevrolet-to-unleash-all-new-corvette-stingray-on-nurburgring"
["caption"]=>
string(50) "Nürburgring Lap Times [ nurburgringlaptimes.com ]"
["description"]=>
string(2942) "Chevrolet is preparing to release its latest supercar model on the publicthe Corvette Stingray. As part of final testing and performance ratings, Corvette has enlisted a team of technical experts to put the car through its paces on the world-famous Nürbugring race course. This team is being headed up by Chevrolet Europes technical manager, Patrick Herrman, who is overseeing the two Corvette Stingray coupes that are undergoing this ultimate stress test."
}
I use a permanent token to do this from my server. I got this token by following this tutorial: What are the Steps to getting a Long Lasting Token For Posting To a Facebook Fan Page from a Server
And I posted all the data to this API path: /{page-id}/feed
As a result I can see these posts, because I am the page's administrator and I see them as the page. No one else can see them.
How can I set permissions to make the posts visible to everyone?
A: Your app must be in development mode.
Until the app is in dev mode, only the admins, developers and testers of the app can see the posts. You can switch your app to live in the settings.
Edit:
Before making your app live, you must get the permissions approved by facebook else nobody but dev/admins/testers will see the posts.
From v2.0 onwards, permissions other than public_profile, email and user_friends need to be submitted for review before you can make your app live; until then, only the testers/admins/developers of the app will be able to test the app with those permissions.
See here for details on Login Submission.
| |
doc_23537645 |
A: Android N's Display Size setting works by computing the screen width and height and scaling the screen accordingly. There is no way to fix the size of the UI, but I think you could use a custom view to do that: just scale the view, like displaying an image bigger and bigger.
| |
doc_23537646 |
*
*Importing the framework
*Creating the model
In addition, to set up the stack I'm using the following code:
- (NSManagedObjectContext *) managedObjectContext {
if (managedObjectContext != nil) {
return managedObjectContext;
}
NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator];
if (coordinator != nil) {
managedObjectContext = [[NSManagedObjectContext alloc] init];
[managedObjectContext setPersistentStoreCoordinator: coordinator];
}
return managedObjectContext;
}
- (NSManagedObjectModel *)managedObjectModel {
if (managedObjectModel != nil) {
return managedObjectModel;
}
managedObjectModel = [[NSManagedObjectModel mergedModelFromBundles:nil] retain];
return managedObjectModel;
}
-(NSPersistentStoreCoordinator *)persistentStoreCoordinator {
if (persistentStoreCoordinator != nil) {
return persistentStoreCoordinator;
}
NSURL *storeUrl = [NSURL fileURLWithPath: [[self applicationDocumentsDirectory] stringByAppendingPathComponent: @"xxxxx.sqlite"]];
NSError *error = nil;
persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel: [self managedObjectModel]];
if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:nil error:&error]) {
// Handle error
}
return persistentStoreCoordinator;
}
- (NSString *)applicationDocumentsDirectory {
return [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
}
But when I try to save data I get the following exception:
Terminating app due to uncaught exception
NSInternalInconsistencyException, reason: 'This
NSPersistentStoreCoordinator has no persistent stores. It cannot
perform a save operation.'
A: This problem could be due to the fact that you have run the application and then changed the model.
The simplest solution is to delete the application from the simulator/device, then performing a Clean, and trying again.
The more correct solution is to deal with light migration as suggested in I keep on getting "save operation failure" after any change on my XCode Data Model.
A simple suggestion is to enable Core Data logs as suggested in XCode4 and Core Data: How to enable SQL Debugging and see what is going under the hood.
Hope that helps.
| |
doc_23537647 | I wish to track the last actions of each person and also count the people online.
Can anyone provide examples of usage of the session database, or links to documentation?
I assumed that this table would be automatically managed by Laravel, but it appears I am incorrect.
A:
Can anyone provide examples of usage of the session database, or links to documentation? I assumed that this table would be automatically managed by Laravel, but it appears I am incorrect.
The only thing incorrect is your assumption of being incorrect. The session database is a storage engine for Laravel's session system. Once you've setup the database and configured Laravel to use the session database, you can use the syntax in the session docs to save and get data that's associated with each individual user of your web based system.
So, you have step 1 -- your database table created.
Step 2 would be configuring the storage engine by editing
app/config/session.php
and changing this
'driver' => 'file',
into this
'driver' => 'database'
Once you've done that any call to the session's put method (or other "saving data" methods) will store data in this table.
| |
doc_23537648 | At this point I cannot move to an HTTP request and I need to try to revise my Interop.IWshRuntimeLibrary object.
The code below is getting the URL + parameters only:
IWshRuntimeLibrary.WshShell wh = new IWshRuntimeLibrary.WshShell();
object windowStyle = 1;
object waitOnReturn = false;
wh.Run(string.Format("{0} {1}", APP_BROWSER, sURL), ref windowStyle, ref waitOnReturn);
thanks,
Y.D.
| |
doc_23537649 | animation-timing-function: cubic-bezier(.27,.97,.86,1);
@keyframes back-y-spin {
0% { transform: rotateY(360deg); }
100% { transform: rotateY(0deg); }
}
I wonder how I could rotate it 2, 3, etc. times but apply the timing function to the entire rotation. For instance, if I specify animation-iteration-count: 2; the carousel starts, then slows down, then stops, and then repeats: faster, slower, stopped.
What I want: the carousel starts, its speed increases, it rotates N times, then the speed decreases and it stops.
Here is example I worked with: https://codepen.io/anon/pen/OgeOEQ
A: Try using transform: rotate(calc(360deg * N)), where N is the number of rotations you want. Example below (using N = 6).
.shape {
width: 100px;
height: 100px;
background: green;
position: absolute;
left: calc(50% - 50px);
top: calc(50% - 50px);
animation: rotate 5s;
animation-timing-function: cubic-bezier(.9,.1,.1,.9);
}
@keyframes rotate {
0% { transform: rotate(calc(360deg * 6)); }
100% { transform: rotate(0deg); }
}
<div class="shape"></div>
| |
doc_23537650 | Here is the encryption part of the program:
KeyGenerator kg = KeyGenerator.getInstance("AES");
kg.init(128);
SecretKey key = kg.generateKey();
Cipher c = Cipher.getInstance("AES");
c.init(Cipher.ENCRYPT_MODE, key);
FileInputStream fis; FileOutputStream fos; CipherOutputStream cos;
fis = new FileInputStream("FileTo.encrypt");
fos = new FileOutputStream("Encrypted.file");
//write encrypted to file
cos = new CipherOutputStream(fos, c);
byte[] b = new byte[16];
int i = fis.read(b);
while (i != -1) {
cos.write(b, 0, i);
i = fis.read(b);
}
cos.close();
//write key to file
byte[] keyEncoded = key.getEncoded();
FileOutputStream kos = new FileOutputStream("crypt.key");
kos.write(keyEncoded);
kos.close();
Here's the decryption part:
//Load Key
FileInputStream fis2 = new FileInputStream("crypt.key");
File f = new File("crypt.key");
long l=f.length();
byte[] b1=new byte[(int)l];
fis2.read(b1, 0, (int)l);
SecretKeySpec ks2=new SecretKeySpec(b1,"AES");
Cipher c1 = Cipher.getInstance("AES");
c1.init(Cipher.DECRYPT_MODE, ks2);
FileInputStream fis1=new FileInputStream("Encrypted.file");
CipherInputStream in= new CipherInputStream(fis1,c1);
FileOutputStream fos0 =new FileOutputStream("decrypted.file");
byte[] b3=new byte[1];
int ia=in.read(b3);
while (ia >=0)
{
c1.update(b3); //<-------remove this
fos0.write(b3, 0, ia);
ia=in.read(b3);
}
in.close();
fos0.flush();
fos0.close();
Now the problem is that the decryption part is not decrypting the last bytes; some bytes are missing. It seems to me that it only decrypts every 16 bytes, but the variable in (a CipherInputStream) returns -1 when it should be returning the last bytes.
How do I get the last bits?
Thanks in advance
Edited: Added comment to point out what has to be removed. Here's some code to properly (i.e., without loading the entire file in java) encrypt and decrypt a file in Java using AES. It's possible to add additional parameters (padding, etc.) but here's the basic code.
A: You just need to remove this line in your code and it'll work fine:
c1.update(b3);
Since you're using a CipherInputStream you don't need to update the Cipher manually. It handles that for you, and by calling it you're interfering with the decryption.
On a side note, for efficiency you should increase the size of your byte[] b and byte[] b3 arrays. Typically 8192 is a good size for buffering.
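The chunked read loop being suggested is language-agnostic; as a neutral illustration (a plain in-memory copy in Python, no crypto involved), the shape is:

```python
import io

# Stand-ins for the file streams; the pattern mirrors the Java code above
src = io.BytesIO(b"x" * 20000)
dst = io.BytesIO()

# Read in 8192-byte chunks; read() returns b"" at EOF (Java's read() returns -1)
while True:
    chunk = src.read(8192)
    if not chunk:
        break
    dst.write(chunk)

print(len(dst.getvalue()))  # 20000
```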
A: Here's some DES example code I dug up, which might be helpful... especially the calls to doFinal.
package forums;
import java.io.*;
import java.security.*;
import javax.crypto.*;
import javax.crypto.spec.*;
/**
This program tests the DES cipher. Usage:
java DESTest -genkey keyfile
java DESTest -encrypt plaintext encrypted keyfile
java DESTest -decrypt encrypted decrypted keyfile
*/
public class DESTest
{
private static void usage() {
System.err.print(
"This program tests the javax.crypto DES cipher package.\n"
+ "usage: java DESTest -genkey keyfile\n"
+ "java DESTest -encrypt plaintext encrypted keyfile\n"
+ "java DESTest -decrypt encrypted decrypted keyfile\n"
);
}
public static void main(String[] args) {
if ( args.length < 2 || args.length > 4
|| !args[0].matches("-genkey|-encrypt|-decrypt")
) {
usage();
return;
}
try {
if ("-genkey".equals(args[0])) {
KeyGenerator keygen = KeyGenerator.getInstance("DES");
SecureRandom random = new SecureRandom();
keygen.init(random);
SecretKey key = keygen.generateKey();
ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(args[1]));
out.writeObject(key);
out.close();
} else {
int mode;
if ("-encrypt".equals(args[0])) {
mode = Cipher.ENCRYPT_MODE;
} else { //-decrypt
mode = Cipher.DECRYPT_MODE;
}
ObjectInputStream keyIn = new ObjectInputStream(new FileInputStream(args[3]));
Key key = (Key) keyIn.readObject();
keyIn.close();
InputStream in = new FileInputStream(args[1]);
OutputStream out = new FileOutputStream(args[2]);
Cipher cipher = Cipher.getInstance("DES");
cipher.init(mode, key);
crypt(in, out, cipher);
in.close();
out.close();
}
} catch (IOException exception) {
exception.printStackTrace();
} catch (GeneralSecurityException exception) {
exception.printStackTrace();
} catch (ClassNotFoundException exception) {
exception.printStackTrace();
}
}
/**
Uses a cipher to transform the bytes in an input stream
and sends the transformed bytes to an output stream.
@param in the input stream
@param out the output stream
@param cipher the cipher that transforms the bytes
*/
public static void crypt(InputStream in, OutputStream out, Cipher cipher)
throws IOException, GeneralSecurityException
{
int blockSize = cipher.getBlockSize();
int outputSize = cipher.getOutputSize(blockSize);
byte[] inBytes = new byte[blockSize];
byte[] outBytes = new byte[outputSize];
int inLength = 0;
boolean more = true;
while (more) {
inLength = in.read(inBytes);
if (inLength == blockSize) {
int outLength = cipher.update(inBytes, 0, blockSize, outBytes);
out.write(outBytes, 0, outLength);
System.out.println(outLength);
} else {
more = false;
}
}
if (inLength > 0) {
outBytes = cipher.doFinal(inBytes, 0, inLength);
} else {
outBytes = cipher.doFinal();
}
System.out.println(outBytes.length);
out.write(outBytes);
}
}
A: import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
public class AESTest {
public static String asHex (byte buf[]) {
StringBuffer strbuf = new StringBuffer(buf.length * 2);
int i;
for (i = 0; i < buf.length; i++) {
if (((int) buf[i] & 0xff) < 0x10)
strbuf.append("0");
strbuf.append(Long.toString((int) buf[i] & 0xff, 16));
}
return strbuf.toString();
}
public static void main(String[] args) throws Exception {
String keyString = "ssssssssssssssss";
// 546578746F2070617261207465737465 (Hex)
byte[] key = keyString.getBytes();
System.out.println(asHex(key).toUpperCase());
String clearText = "sdhhgfffhamayaqqqaaaa";
// ZXNzYXNlbmhhZWhmcmFjYQ== (Base64)
// 6573736173656E686165686672616361 (Hex)
byte[] clear = clearText.getBytes();
System.out.println(asHex(clear).toUpperCase());
SecretKeySpec skeySpec = new SecretKeySpec(key, "AES");
// PKCS5Padding or NoPadding
Cipher cipher = Cipher.getInstance("AES/ECB/NoPadding");
cipher.init(Cipher.ENCRYPT_MODE, skeySpec);
byte[] encrypted = cipher.doFinal(clear);
System.out.println(asHex(encrypted).toUpperCase());
cipher.init(Cipher.DECRYPT_MODE, skeySpec);
byte[] original =
cipher.doFinal(encrypted);
System.out.println(original);
String originalString = new String(original);
System.out.println("Original string: " +
originalString + " " + asHex(original));
}
}
| |
doc_23537651 | Some code to get your head around what I've tried.
import React from "react";
import axios from "axios"
function App() {
const fetchLongRequest = async () => {
try{
// All peachy over here if no timeout is implemented...
const myRequest = await axios({
url: "https://jsonplaceholder.typicode.com/todos/1",
headers: {
accept: "application/json",
"Content-Type": "application/json"
},
})
console.log("SUCCESS!", JSON.stringify(myRequest.data, null, 2))
}catch(error){
console.log("FAIL!", error.message)
}
}
return (
<button onClick={() => fetchLongRequest()}>Fetch</button>
);
}
export default App;
Now this is where I introduce the timeout:
import React from "react";
import axios from "axios";
function App() {
const fetchLongRequest = async () => {
// timeout works as expected but I'd like to let the call go to the backend and do its thing.
try {
const myRequest = await axios({
url: "https://jsonplaceholder.typicode.com/todos/1",
headers: {
accept: "application/json",
"Content-Type": "application/json",
},
timeout: 1,
});
console.log("SUCCESS!", JSON.stringify(myRequest.data, null, 2));
} catch (error) {
console.log("FAIL!", error.message);
}
};
return <button onClick={() => fetchLongRequest()}>Fetch</button>;
}
export default App;
I know the request is a bit odd as it opens many questions such as error handling, how to know when this call is done, etc. I'd like to get some feedback in how I can achieve this task...please :)
A: All you need is a timeout set BEFORE the request
import React from "react";
import axios from "axios";
function App() {
const fetchLongRequest = async () => {
const waitTime = 5000;
setTimeout(() => console.log("Request taking a long time"), waitTime);
try {
const result = await axios({
url: "https://jsonplaceholder.typicode.com/todos/1",
headers: {
accept: "application/json",
"Content-Type": "application/json",
}
});
console.log("SUCCESS!", JSON.stringify(result.data, null, 2));
} catch(error) {
console.log("FAIL!", error.message);
}
};
return <button onClick = {() => fetchLongRequest()}>Fetch </button> ;
}
export default App;
The original solutions below are total overkill!!
I think this will do what you want, uses Promise.race
note: this is still not quite right as far as error handling goes
the handleError function is purely so that if the request fails before the timeout, the failure isn't output twice
import React from "react";
import axios from "axios";
function App() {
const fetchLongRequest = async () => {
const waitTime = 5000;
const handleError = error => {
// this makes sure that the FAIL output isn't repeated in the case when there's a failure before the timeout
if (!error.handled) {
if (error.timedout) {
console.log("TIMEDOUT", error.timedout);
} else {
console.log("FAIL!", error.message);
error.handled = true;
throw error;
}
}
};
const makeRequest = async () => {
try {
const result = await axios({
url: "https://jsonplaceholder.typicode.com/todos/1",
headers: {
accept: "application/json",
"Content-Type": "application/json",
}
});
console.log("SUCCESS!", JSON.stringify(result.data, null, 2));
} catch(error) {
return handleError(error);
}
};
const timer = new Promise((_, reject) => setTimeout(reject, waitTime, {timedout: "request taking a long time"}));
try {
await Promise.race([makeRequest(), timer]);
} catch(error) {
handleError(error);
}
};
return <button onClick = {() => fetchLongRequest()}>Fetch </button> ;
}
export default App;
As a side note, this code is far cleaner without async/await - though, to be fair, I'm not as fluent using async/await as I am with Promises alone - I've used Promises since before there was a .catch :p
non async/await implementation
import React from "react";
import axios from "axios";
function App() {
const fetchLongRequest = () => {
const waitTime = 5000;
const handleError = error => {
// this makes sure that the FAIL output isn't repeated in the case when there's a failure before the timeout
if (!error.handled) {
if (error.timedout) {
console.log("TIMEDOUT", error.timedout);
} else {
console.log("FAIL!", error.message);
error.handled = true;
throw error;
}
}
};
const myRequest = axios({
url: "https://jsonplaceholder.typicode.com/todos/1",
headers: {
accept: "application/json",
"Content-Type": "application/json",
}
}).then(result => {
console.log("SUCCESS!", JSON.stringify(result.data, null, 2));
}).catch(handleError);
const timer = new Promise((_, reject) => setTimeout(reject, waitTime, {timedout: "request taking a long time"}));
return Promise.race([myRequest, timer]).catch(handleError);
};
return <button onClick = {() => fetchLongRequest()}>Fetch </button> ;
}
export default App;
Of course "cleaner" is just my opinion
A: An axios call is promise-based, so you can simply use then and catch blocks to perform tasks on completion without await; the request then runs in the background without blocking the client side. Using a timeout for this scenario isn't advisable, because on a slow network the call may take a minute to complete and would be cut off once the timeout expires. Instead of using await, which blocks until the request resolves, remove it and the call simply runs asynchronously, which I think is what you want to achieve.
const myRequest = axios({
url: "https://jsonplaceholder.typicode.com/todos/1",
headers: {
accept: "application/json",
"Content-Type": "application/json"
},
})
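To avoid an unhandled rejection while still letting the request run in the background, the handlers can be chained directly onto the promise. A sketch with a plain promise standing in for the axios call:

```javascript
// Stand-in for the axios call: any promise-returning function behaves the same.
function fakeRequest() {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ data: { id: 1 } }), 10)
  );
}

// Fire-and-forget: no await, so the caller is not blocked.
// Chaining .then/.catch still handles success and failure (and avoids
// an unhandled promise rejection).
fakeRequest()
  .then((result) => console.log("SUCCESS!", JSON.stringify(result.data)))
  .catch((error) => console.log("FAIL!", error.message));

console.log("request started, caller continues immediately");
```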
| |
doc_23537652 |
Must declare the scalar variable "@student_id".
Can I get some help understanding what possibly could be causing this?
CREATE TABLE TABLE2
(
TYPE1 nvarchar(256),
TABLE_ID nvarchar(256)
);
DECLARE @student_id INT
SET @student_id = 1;
WHILE @student_id <= (SELECT COUNT(*) FROM STUDENT)
BEGIN
DECLARE @area NVARCHAR(256)
SET @area = (SELECT AREA
FROM
(SELECT
ROW_NUMBER() OVER(ORDER BY AREA ASC) AS NB_ROW,
AREA
FROM
STUDENT) AS TEMP
WHERE NB_ROW = @student_id)
INSERT INTO TABLE2 (TYPE1, TABLE_ID)
SELECT
TYPE1, TABLE_ID
FROM
TABLE3
SET @student_id = @student_id + 1
END;
| |
doc_23537653 | I was attempting to encrypt the cookie data with md5, but I cannot validate the hash back.
It has got to do, with the fact that cookie_data is a serialized array, because normal stringvalues work ok.
It's actually from a codeigniter class, but it does not work??
Does anyone know what the problem might be?
$hash = substr($session, strlen($session)-32);
$session= substr($session, 0, strlen($session)-32);
if ($hash !== md5($session.$this->encrypt_key))
{........
and the cookie value is encrypted like this
$cookie_data = $cookie_data.md5($cookie_data.$this->encrypt_key);
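Mechanically, that scheme appends a 32-character md5 signature to the payload and later splits it off for comparison. Here is the same idea sketched in Python purely for illustration (the key is a placeholder, and this is not the CodeIgniter implementation):

```python
import hashlib

KEY = "secret-key"  # stands in for $this->encrypt_key

def sign(payload: str) -> str:
    # Append md5(payload + key): the last 32 hex chars are the signature.
    return payload + hashlib.md5((payload + KEY).encode()).hexdigest()

def verify(cookie: str):
    payload, digest = cookie[:-32], cookie[-32:]
    if digest != hashlib.md5((payload + KEY).encode()).hexdigest():
        return None  # tampered or corrupted (e.g. altered by URL-encoding)
    return payload

cookie = sign("user=42")
print(verify(cookie))        # user=42
print(verify(cookie + "x"))  # None
```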
EDIT
I found that the answer is to use urlencode and urldecode in the process of creating and validating the
md5 hashes, because setcookie urlencodes automatically, thereby possibly changing the hash.
thanks, Richard
A: You have a typo:
md5($sessie.$this->encrypt_key))
should be
md5($session.$this->encrypt_key))
If you develop with notices turned on you'll catch this kind of thing much more easily.
You're not encrypting your data, you're signing it.
A: md5 is a oneway function. It is not a reversible one, so you can't decrypt the data.
The only thing you can do is encrypt the original data (if you saved it elsewhere) and check the result of this second computation.
If the value retrieved and the new value calculated are the same, the hash you received is valid (As you are doing in your code).
EDIT
You know, with just three lines of code I will guess some possible causes:
*
*$session doesn't contain, at the beginning of your code, the same value as cookie_data.
*you are using multibyte strings and strlen is not mb-aware (use the idiom substr($session, 0, -32) to get the payload part of the string).
*maybe substr doesn't cope with multibyte strings either; use mb_substr explicitly.
To me the first case is the more probable, from what I can see.
A:
I was attempting to encrypt the cookie
data with md5, but I cannot decrypt
it back for validation.
md5 isn't an encryption method. It creates a one-way hash that can't be turned back into the original data.
If you want to encrypt data try mcrypt
| |
doc_23537654 | I have tried to build on my local machine and it works perfectly, no error at all!
But I pushed my code to GitHub and tried to build my image on GCP Cloud Build, and the build failed with a COPY failed: stat app/dist/apps/api: file does not exist error.
The error happens in the build stage, when I try to copy the built code from the development stage into the build stage.
Here is my Dockerfile:
FROM node:18 As development
RUN apt-get update && apt-get install -y python
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
WORKDIR /app
COPY --chown=node:node . .
RUN pnpm install --frozen-lockfile --prod
RUN pnpx nx run api:build:production
USER node
FROM node:18 As build
WORKDIR /app
COPY --chown=node:node package.json pnpm-lock.yaml ./
COPY --chown=node:node prisma ./prisma/
COPY --chown=node:node --from=development /app/dist/apps/api .
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
ENV NODE_ENV production
RUN pnpm install --prod --frozen-lockfile
USER node
FROM node:18-alpine As production
WORKDIR /app
COPY --chown=node:node --from=build /app .
RUN apk add --update --no-cache openssl1.1-compat curl
RUN npx prisma generate
RUN curl -sf https://gobinaries.com/tj/node-prune | sh
RUN /usr/local/bin/node-prune
ENV PORT=3333
EXPOSE ${PORT}
CMD [ "node", "main.js" ]
Here is the build log:
Build Log
For more information about the code, please visit my repo.
*
*Tried building without the multi-stage build and it works perfectly, but I need multi-stage to reduce the image size.
*(not working) Tried changing the path from /app/dist/apps/api to ./app/dist/apps/api and app/dist/apps/api.
*(not working) Tried this https://stackoverflow.com/a/71014279/10145023
A: It's correct; the file/folder (/app/dist/apps/api) doesn't exist.
I'm skeptical that you were able to build (the container using the Dockerfile) locally.
One way to debug this is to comment out everything except the first stage.
Then, run the container and use a shell to inspect it:
IMG="75524662"
TAG=$(git rev-parse HEAD)
podman build \
--tag=${IMG}:${TAG} \
--file=./Dockerfile \
${PWD}
podman run \
--interactive --tty --rm \
localhost/${IMG}:${TAG} \
bash
Then:
node@3de51ec71978:/app$ ls -la
total 484
drwxr-xr-x 9 node node 4096 Feb 21 19:59 .
dr-xr-xr-x 22 root root 4096 Feb 21 20:02 ..
-rw-rw-r-- 1 node node 245 Feb 21 19:58 .editorconfig
-rw-rw-r-- 1 node node 753 Feb 21 19:58 .eslintrc.json
drwxrwxr-x 3 node node 4096 Feb 21 19:58 .github
-rw-rw-r-- 1 node node 74 Feb 21 19:58 .prettierignore
-rw-rw-r-- 1 node node 26 Feb 21 19:58 .prettierrc
drwxrwxr-x 4 node node 4096 Feb 21 19:58 apps
-rw-rw-r-- 1 node node 28 Feb 21 19:58 babel.config.json
drwxrwxr-x 2 node node 4096 Feb 21 19:58 docker
-rw-rw-r-- 1 node node 98 Feb 21 19:58 jest.config.ts
-rw-rw-r-- 1 node node 90 Feb 21 19:58 jest.preset.js
drwxrwxr-x 2 node node 4096 Feb 21 19:58 libs
drwxr-xr-x 11 root root 4096 Feb 21 20:00 node_modules
-rw-rw-r-- 1 node node 1509 Feb 21 19:58 nx.json
-rw-rw-r-- 1 node node 2516 Feb 21 19:58 package.json
-rw-rw-r-- 1 node node 416349 Feb 21 19:58 pnpm-lock.yaml
drwxrwxr-x 4 node node 4096 Feb 21 19:58 prisma
drwxrwxr-x 3 node node 4096 Feb 21 19:58 tools
-rw-rw-r-- 1 node node 467 Feb 21 19:58 tsconfig.base.json
node@3de51ec71978:/app$ ls apps
api web
node@3de51ec71978:/app$ ls /app/apps/api
Dockerfile jest.config.ts project.json src tsconfig.app.json tsconfig.json tsconfig.spec.json webpack.config.js
So, you probably want:
COPY --chown=node:node --from=development /app/apps/api .
| |
doc_23537655 | The table essentially looks like this:
ingredient_name | ingredient_method | consolidated_name
Cheese | [camembert, pkg] |
Cheese | [cream, pastueri] |
Egg | [raw, scrambled] |
I'm trying to iterate through the rows and fill the consolidated_name column with values from either ingredient_name or ingredient_method.
For example, if ingredient_name is "Cheese" I want that row's consolidated name to be the first element of the list in ingredient_method.
This is the code I have so far:
for i, row in df.iterrows():
consolidated = df['ingredient_name']
if (df['ingredient_name'] == 'Cheese').all():
consolidated = df['ingredient_method'][0]
df.set_value(i,'consolidated_name',consolidated)
The code runs without errors but none of the values change in the dataframe.
Any ideas?
A: One could use .loc (combined with .str[0])
With:
df = pd.DataFrame(dict(ingredient_name=['Cheese','Cheese','Egg'],
ingredient_method=[['camembert', 'pkg'],
['cream', 'pastueri'],
['raw', 'scrambled']]))
Do:
#Initialize consolidated_name with None for instance
df['consolidated_name'] = [None]*len(df) #Not mandatory, will fill with NaN if not set
#Use .loc to get the rows you want and .str[0] to get the first elements
_filter = df.ingredient_name=='Cheese' #Filter you want to
df.loc[_filter,'consolidated_name'] = df.loc[_filter,'ingredient_method'].str[0]
Result:
print(df)
ingredient_method ingredient_name consolidated_name
0 [camembert, pkg] Cheese camembert
1 [cream, pastueri] Cheese cream
2 [raw, scrambled] Egg None
Note
#1
If you want to consolidate all the duplicated ingredients you can filter with the following:
_duplicated = df.ingredient_name[df.ingredient_name.duplicated()]
_filter = df.ingredient_name.isin(_duplicated)
The use of .loc is unchanged; see the next example:
df = pd.DataFrame(dict(ingredient_name=['Cheese','Cheese','Egg','Foo','Foo'],
ingredient_method=[['camembert', 'pkg'],
['cream', 'pastueri'],
['raw', 'scrambled'],
['bar', 'taz'],
['taz', 'bar']]))
_duplicated = df.ingredient_name[df.ingredient_name.duplicated()]
_filter = df.ingredient_name.isin(_duplicated)
df.loc[_filter,'consolidated_name'] = df.loc[_filter,'ingredient_method'].str[0]
print(df)
ingredient_method ingredient_name consolidated_name
0 [camembert, pkg] Cheese camembert
1 [cream, pastueri] Cheese cream
2 [raw, scrambled] Egg NaN
3 [bar, taz] Foo bar
4 [taz, bar] Foo taz
#2
If you want you can initialize with ingredient_name:
df['consolidated_name'] = df.ingredient_name
Then do your stuff:
_duplicated = df.ingredient_name[df.ingredient_name.duplicated()]
_filter = df.ingredient_name.isin(_duplicated)
df.loc[_filter,'consolidated_name'] = df.loc[_filter,'ingredient_method'].str[0]
print(df)
ingredient_method ingredient_name consolidated_name
0 [camembert, pkg] Cheese camembert
1 [cream, pastueri] Cheese cream
2 [raw, scrambled] Egg Egg #Here it has changed
3 [bar, taz] Foo bar
4 [taz, bar] Foo taz
A: You can use DataFrame.apply for that purpose. Simply wrap your decision logic (which is now in the for loop) into a corresponding function.
def func(row):
if row['ingredient_name'] == 'Cheese':
return row['ingredient_method'][0]
return None
df['consolidated_name'] = df.apply(func, axis=1)
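Running that apply version on the sample frame from the first answer gives the expected column; a quick self-contained check:

```python
import pandas as pd

# Same sample data as in the question/first answer.
df = pd.DataFrame({
    "ingredient_name": ["Cheese", "Cheese", "Egg"],
    "ingredient_method": [["camembert", "pkg"], ["cream", "pastueri"], ["raw", "scrambled"]],
})

def func(row):
    # First method element for Cheese rows, None otherwise.
    if row["ingredient_name"] == "Cheese":
        return row["ingredient_method"][0]
    return None

df["consolidated_name"] = df.apply(func, axis=1)
print(df["consolidated_name"].tolist())  # ['camembert', 'cream', None]
```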
A: If you want to do it using your initial loop:
consolidated_name = []
for i,row in df.iterrows():
if row[0] =='Cheese':
consolidated_name.append(row[1][0])
else: consolidated_name.append(None)
df['consolidated_name']=consolidated_name
## out:
ingredient_name ingredient_method consolidated_name
0 Cheese [camembert, pkg] camembert
1 Cheese [cream, pastueri] cream
2 Egg [raw, scrambled] None
| |
doc_23537656 | Stored procedure performs insert operation and returns values of one of the columns as output parameter.
I need to execute it from c# and get back value of output param.
Below is what I have tried so far.
SqlParameter[] associateParams = new SqlParameter[10];
{
associateParams[0]=new SqlParameter("@orgName", newAssociate.OrgName);
associateParams[1]=new SqlParameter("@createdBy", newAssociate.Email);
associateParams[2]=new SqlParameter("@userName", newAssociate.UserName);
associateParams[3]=new SqlParameter("@workEmail", newAssociate.Email);
associateParams[4]=new SqlParameter("@password", newAssociate.Password);
associateParams[5]=new SqlParameter("@teamStrength", "0");
associateParams[6]=new SqlParameter("@projName", newAssociate.ProjName);
associateParams[7]=new SqlParameter("@userType", "Associate");
associateParams[8] = new SqlParameter("@userSalt", SqlDbType.VarChar, 400);
associateParams[8].Direction = ParameterDirection.Output;
associateParams[9] = new SqlParameter("@activationKey", SqlDbType.Int);
associateParams[9].Direction = ParameterDirection.Output;
}
using (SqlCommand cmd = con.CreateCommand())
{
log.Debug("In command is called");
cmd.CommandType = CommandType.StoredProcedure;
cmd.CommandText = ProcedureName;
cmd.Parameters.AddRange(param);
log.Debug("Command is called");
try
{
if (con.State != ConnectionState.Open)
{
con.Open();
log.Debug("Con is open");
}
cmd.ExecuteScalar();
log.Debug(cmd.Parameters["@userSalt"].Value.ToString());
log.Debug(cmd.Parameters["@activationKey"].Value.ToString());
Executing the above performs the insert successfully but returns null for the output parameter values.
Can anyone suggest what I am missing here?
Thanks
A: try like this when you define output parameters:
associateParams[8] = new SqlParameter("@userSalt", SqlDbType.VarChar, 400);
associateParams[8].Value = "";
associateParams[8].Direction = ParameterDirection.Output;
associateParams[9] = new SqlParameter("@activationKey", SqlDbType.Int);
associateParams[9].Value = 0;
associateParams[9].Direction = ParameterDirection.Output;
let me know if this helps.
UPDATE: here is my own method
cmd.Parameters.Add(new SqlParameter("@userSalt", SqlDbType.VarChar, 400));
cmd.Parameters["@userSalt"].Value = "";
cmd.Parameters["@userSalt"].Direction = ParameterDirection.Output;
UPDATE1: because you don't use ExecuteNonQuery. Change cmd.ExecuteScalar() to cmd.ExecuteNonQuery().
| |
doc_23537657 |
<s2ui:submitButton elementId='reset' form='resetPasswordForm'
messageCode='spring.security.ui.resetPassword.submit(msg)'/>
which gives me the msg plus a button with no value.
I want the button to have the msg as its value.
I even tried giving an explicit value attribute, but it is ignored.
How do I edit the button's value then?
+
How to align the s2ui form.
the problem was in the taglib
def submitButton = { attrs ->
String form =getRequiredAttribute(attrs, 'form', 'submitButton')
String elementId = getRequiredAttribute(attrs, 'elementId', 'submitButton')
String text = resolveText(attrs)
def writer = getOut()
// writer << """
// writer << ">${text}\n"
writer << "\n"
String javascript = """\$("#${elementId}").button();
\$('#${elementId}').bind('click', function() {
document.forms.${form}.submit();});"""
writeDocumentReady writer, javascript
}
So this used to set the value as null and show the text as a link with a blank button
A: I found that the problem was actually in the taglib.
But as I could not commit my local changes to the plugin on the server, that did not help. So we may need to edit the plugin on the server where we host our application too, or create our own taglib instead of using the plugin's.
So currently I am using my own submit button.
| |
doc_23537658 | - (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
image.center = CGPointMake(acceleration.x, acceleration.y);
}
When I test the app, the image that is supposed to move around just sits at the x0 y0 position.
I declared the accelerometer, adopted UIAccelerometerDelegate in the .h, and so on...
What am I doing wrong?
Thanks in advance! -DD
A: You do realize that the accelerometer returns, as the name would suggest, measures of acceleration not points on the display? Anyway, what you need to do, is alter the center (not replace it completely), which will allow you to move the image.
Something along these lines:
image.center = CGPointMake(image.center.x + acceleration.x,
image.center.y - acceleration.y);
It is also important to note that the acceleration usually stays between -1 and 1 (unless the user shakes the device), which is due to the gravity being 1G. Therefore you should probably multiply the acceleration.x and .y values with some constant to make the image move a bit faster than about 1 point at a time.
There are additional things you should think about, what if the image is at the edge of the screen? What if the user wants to use the app in some other position than flat on a surface (needs calibration of the accelerometer)?
A: -(void)moveImage:(id)sender
{
[operationView bringSubviewToFront:[(UIPanGestureRecognizer*)sender view]];
[[[(UIPanGestureRecognizer*)sender view] layer] removeAllAnimations];
CGPoint translatedPoint = [(UIPanGestureRecognizer*)sender translationInView:self.view];
if([(UIPanGestureRecognizer*)sender state] == UIGestureRecognizerStateBegan)
{
firstX = [[sender view] center].x;
firstY = [[sender view] center].y;
[imgDeleteView setHidden:FALSE];
}
else if ([(UIPanGestureRecognizer*)sender state] == UIGestureRecognizerStateEnded)
{
[imgDeleteView setHidden:TRUE];
}
translatedPoint = CGPointMake(firstX+translatedPoint.x, firstY+translatedPoint.y);
[[(UIPanGestureRecognizer *)sender view] setCenter:translatedPoint];
}
| |
doc_23537659 | The output I'm getting is: 1 + 2 + 3 + 4 + 5 + = 15 (with an extra + side on the end). I'm not sure how to get it to output without the extra + at the end, and am clearly not searching for the right terms to figure it out. Thanks!
Here's my code:
function exercise7Part2() {
// PART 2: YOUR CODE STARTS AFTER THIS LINE
// Declare variables
var loopStart;
var loopMax;
var total;
// Assignments
loopStart = Number(prompt("Enter a number:"));
loopMax = Number(prompt("Enter a number larger than the last:"));
total = 0;
// Processing
while (loopStart <= loopMax)
{
total += loopStart;
document.write(loopStart + " + ");
loopStart++;
}
document.write(" = " + total);
}
A: It's because you're printing loopStart + "+", which will always print the + at the end. Instead you must check whether it's the last value and prevent the + from printing, or else use a ternary operator to print it.
In this example, I'm checking whether loopStart and loopMax are equal; if they're not equal, I append + at the end.
It will be like:
document.write(loopStart+ (loopStart!=loopMax ? "+" : ""));
Here (loopStart!=loopMax ? "+" : "") is a ternary operator. loopStart!=loopMax is a boolean expression: it's evaluated, and if it's true the value after ? is used (in this case +); if it's false the value after : is used (in this case the empty string "").
// Declare variables
var loopStart;
var loopMax;
var total;
// Assignments
loopStart = Number(prompt("Enter a number:"));
loopMax = Number(prompt("Enter a number larger than the last:"));
total = 0;
// Processing
while (loopStart <= loopMax)
{
total += loopStart;
document.write(loopStart+ (loopStart!=loopMax ? "+" : ""));
loopStart++;
}
document.write(" = " + total);
With normal if condition block
while (loopStart <= loopMax)
{
total += loopStart;
if(loopStart===loopMax) {
document.write(loopStart);
} else {
document.write(loopStart+ "+");
}
loopStart++;
}
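Another way to sidestep the trailing separator entirely is to collect the terms in an array and join them; a sketch:

```javascript
// Collect the terms first, then join with " + " so no trailing sign can appear.
function sumSeries(loopStart, loopMax) {
  var terms = [];
  var total = 0;
  while (loopStart <= loopMax) {
    terms.push(loopStart);
    total += loopStart;
    loopStart++;
  }
  return terms.join(" + ") + " = " + total;
}

console.log(sumSeries(1, 5)); // 1 + 2 + 3 + 4 + 5 = 15
```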
| |
doc_23537660 |
*
*Maven home path
*Local repository (override checked) and
*VM options for importer
keep getting changed at regular intervals without me doing anything. Basically I have to reset those settings once a day. Have you guys had any similar problems?
(IntelliJ IDEA 2021.1.2 (Ultimate Edition), Maven 3)
| |
doc_23537661 | Added simplified repo link to bottom of post to reproduce issue.
I have got a JSON file which builds up my website, here is a snippet of it:
[
{
"id": "home",
"path": [
""
],
"name": "Home",
"showInNav": true,
"isDynamic": false,
"components": [
{
"id": "promotion-1",
"name": "PromotionImage",
"theme": "default",
"props": {
"image": {
"path": "",
"alt": "Alt text here",
"position": "right"
},
"classes": {
"wrapper": {
"classes": "bg-zinc-900"
}
}
}
}
]
}
]
This gets built up like so:
// [...path].tsx
import getPagePaths from '../services/getPagePaths';
import getPageProps from '../services/getPageProps'; // Import the above JSON file here.
import Looper from '@lib/core/components/Looper';
export async function getStaticPaths() {
const paths = await getPagePaths();
return {
paths: paths.map((path) => ({ params: { path } })),
fallback: false,
};
}
export async function getStaticProps({ params }) {
const page = await getPageProps(params.path);
return { props: { ...page, components: page.components || [] } };
}
const Page = ({ components }): JSX.Element => {
return <Looper components={components} />;
};
export default Page;
// Looper.tsx
const Looper = ({ components }): JSX.Element => {
if (!components || !components.length) return null;
return components.map((component) => {
const Component = require(`@lib/themes/${component.theme}/components/${component.name}`).default;
return !Component ? null : <Component key={component.id} {...component.props} />;
});
};
export default Looper;
// PromotionImage.tsx
const PromotionImage = ({ classes, image, components }: PromotionProps): JSX.Element => {
const wrapperClasses = ['px-0 sm:px-6 lg:px-8 pt-0 sm:pt-[60px] py-[40px] sm:py-[60px] md:py-[80px]', classes?.wrapper || ''].join(' ').trim();
return (
<section className={wrapperClasses}>
...
So it's simple: we import the JSON file, loop through the items, require the components using the path we build up, then check whether there are any classes in the JSON object and merge them.
So, for example, the bg-zinc-900 class should be applied to the wrapper. However this is not the case; but if I briefly put that class within the component itself and then remove it, the class gets applied.
It feels like whenever Tailwind is tree-shaken, this class isn't being seen if it comes from the JSON. Any ideas?
I have added the component to the config too, so tailwind will see the component.
// tailwind.config.js
module.exports = {
content: ['./lib/**/*.{js,ts,jsx,tsx}', './pages/**/*.{js,ts,jsx,tsx}', './components/**/*.{js,ts,jsx,tsx}'],
};
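One thing worth checking against that config: Tailwind only keeps class names it finds while scanning the files matched by `content`, and the globs above cover only js/ts sources. If `bg-zinc-900` appears only in the JSON file, the scanner never sees it. Adding the data files to the globs would let it be picked up; a sketch, assuming the page JSON lives under `./data` as in the repro repo:

```javascript
// tailwind.config.js
module.exports = {
  content: [
    './lib/**/*.{js,ts,jsx,tsx}',
    './pages/**/*.{js,ts,jsx,tsx}',
    './components/**/*.{js,ts,jsx,tsx}',
    // Assumption: the page-definition JSON lives here; adjust to the real path.
    './data/**/*.json',
  ],
};
```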
Edit:
I have created a simplified project to show the issue here.
Steps to reproduce:
*
*Run next dev.
*You will see Testing on the page, with a red colour. Within data/classes.json you can see there is also bg-zinc-900.
*Add that class to the first array item's wrapper in index.js and the background will be applied; remove it and it will still be there.
| |
doc_23537662 | I tried this code and it shows Form2 but prints the whole database in my DataGridView; I want only the selected row. Can anyone give me code or syntax and explain it to me? I badly need it. I really appreciate your help. Thank you very much.
If e.RowIndex >= 0 Then
    selectedrow = DataGridView1.Rows(e.RowIndex)
    Form2.ShowDialog()
End If
doc_23537663 |
*
*Calling iterator.hasNext changes the value of iterator.size.
*iterator.hasNext == false even on a non-empty iterator before any iteration has occurred.
Any ideas on what may cause these problems?
val list = scala.collection.immutable.List(1, 2, 3)
val iterator = list.iterator
println(iterator.size) // 3
println(iterator.hasNext) // false
println(iterator.size) // 0
Confirmed locally on Scala version 2.11.4 (OpenJDK 64-Bit Server VM, Java 1.8.0_72).
Confirmed on IdeOne here.
A: Computing the size of an iterator consumes it, as the size is not stored. I think it is working as intended, even though the API (i.e. offering size on an iterator) is misleading.
| |
doc_23537664 | I am working in the Sandbox environment right now and everything is going well. But once my binary is approved, it will be in the real (production) payment environment and the Sandbox will no longer apply. In my service I can switch URLs depending on whether it is Sandbox or not.
But when my app is in the In Review state, will the review team also test in the Sandbox? What if they put me in real payment status and test my app like that? They would get errors, because my service pushes data to the Sandbox environment...
Is there any way to determine programmatically whether my binary is running against the Sandbox?
A: The review process tests against the sandbox. You should submit your app with the provision of a developer hold so you can then switch your servers to production before you release the app.
A: Apple's In-App Purchase Programming Guide describes the environments that are used during development, review and production along with the suggested approach from App Developers
As you can see from the diagram, during review purchases are made against Apple's test (sandbox) server, but since the binary that is being reviewed is the binary that will be released to the store, that binary must be configured to use your production server (or production URL if you have only a single server).
In order to ensure that receipts are still validated correctly, the guide advises:
When validating receipts on your server, your server needs to be able to handle a production-signed app getting its receipts from Apple’s test environment. The recommended approach is for your production server to always validate receipts against the production App Store first. If validation fails with the error code “Sandbox receipt used in production”, validate against the test environment instead
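The "production first, then sandbox" fallback the guide describes can be sketched roughly as below. The helper is a stand-in for an HTTP POST to Apple's verifyReceipt endpoint, and 21007 is the status Apple returns for "Sandbox receipt used in production":

```python
PRODUCTION_URL = "https://buy.itunes.apple.com/verifyReceipt"
SANDBOX_URL = "https://sandbox.itunes.apple.com/verifyReceipt"

def validate_receipt(receipt_data, post):
    """post(url, receipt_data) -> the 'status' field of Apple's JSON reply.

    post is a stand-in for an HTTP POST to the verifyReceipt endpoint.
    """
    status = post(PRODUCTION_URL, receipt_data)
    if status == 21007:  # sandbox receipt was sent to the production endpoint
        status = post(SANDBOX_URL, receipt_data)
    return status
```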
A: I think you can't test a real purchase (with real money).
If your purchase works against the sandbox, it should work against prod as well.
IAP Documents
| |
doc_23537665 | class baseactivity extends Activity
{
//some stuffs is here, so i cannot Extend Fragment
}
class activity extends baseactivity
{
GoogleMap map=((MapFragment) getFragmanentManager().findFragmentById(R.id.map)).getMap();
if (map != null)
    Log.e("map", "map initialized successfully"); // printed up to KitKat
else
    Log.e("map", "map was unable to initialize"); // printed on Lollipop
}
It works fine up to KitKat, but the map is null on Lollipop. I know that I could use getChildFragmentManager, but I cannot extend the Fragment class because my project is almost complete and I cannot make big changes.
Please help me without extending the Fragment class.
Updated code, which is also not working for me:
class BaseActivity extends Activity
{
    // some stuff
}
class Parent extends BaseActivity implements OnMapLoadedCallback
{
    GoogleMap map;
    void onCreate(Bundle savedInstanceState)
    {
        MapFragment mapFragment;
        mapFragment.setOnMapLoadedCallback(this);
        // map.addMarker(...)
    }
    @Override
    public void onMapLoaded()
    {
        map = ((MapFragment) getFragmentManager().findFragmentById(R.id.map)).getMap();
    }
}
A: Per the getMap() documentation:
This method is deprecated. Use getMapAsync(OnMapReadyCallback) instead. The callback method provides you with a GoogleMap instance guaranteed to be non-null and ready to be used.
getMap() is not guaranteed to return a non-null map as it takes some time to prepare - use an OnMapReadyCallback and do your map initialization steps in the callback there.
| |
doc_23537666 | The problem I'm having is one I've solved before by using a format function (similar to printf in many languages), but that was in another language.
The problem specifically is that text like User performed action may become Action was performed by user in another language (i.e. the terms may end up in a different order).
In the past, I've done something like #translate("Welcome to the site, %s!", {"Username"}), and then used the language's format function to replace %s with the username. I could simply use String#replace, but then I couldn't do something like #translate("Welcome to the site, %s! You last visited on %s!", {"username", "last visit"}) like I'd like to.
Sorry if this is a bad explanation; just look up printf in something like PHP.
What would be the best way to replicate something like this in Java? Thanks for the help.
A: Don't reinvent the wheel. Use the JSTL fmt taglib; it supports parameterized messages as well.
<fmt:message key="identifier">
<fmt:param value="${username}" />
</fmt:message>
See also:
* How to internationalize a Java web application? - a mini tutorial
A: I've been stuck on that question myself, and I found that the best way is to use a resource bundle like everyone (or almost everyone) does. You can use the fmt taglib or the Spring message tag.
I tried the gettext solution, but it involves some extra steps (xgettext, msgmerge, msgfmt), which makes it too complex, and it is not a great fit for a webapp (in my opinion).
I'm going to use the spring message, you can see an example on:
http://viralpatel.net/blogs/2010/07/spring-3-mvc-internationalization-i18n-localization-tutorial-example.html
A: Use property files to hold the different languages:
en_US.properties
fr_CA.properties
and in those properties files, have your text like this:
user.performed.action=User performed an action
and then, as BalusC said, use JSTL.
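Outside of JSP pages, plain java.text.MessageFormat solves the reordering problem directly, because its placeholders are positional ({0}, {1}, ...) and a translation can therefore use them in any order. A minimal sketch; the patterns, names and strings below are illustrative, not from the original post:

```java
import java.text.MessageFormat;

// Positional placeholders let each translation reorder its arguments freely.
public class Messages {
    public static String translate(String pattern, Object... args) {
        return MessageFormat.format(pattern, args);
    }

    public static void main(String[] args) {
        String en = "Welcome to the site, {0}! You last visited on {1}.";
        // Hypothetical translation where the arguments appear in the other order:
        String other = "On {1} you last visited; welcome back, {0}!";
        System.out.println(translate(en, "alice", "Monday"));
        System.out.println(translate(other, "alice", "Monday"));
    }
}
```

In practice the patterns would come from a per-locale ResourceBundle, as the answers above suggest.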
| |
doc_23537667 | Can someone assist me in reconnecting the websocket?
I am listening for an onClose event and calling the method that creates the websocket. The error I get once I refresh the page is:
Socket encountered error: write after end Closing socket
/home/pi/data_collector_access_point/node_modules/ws/lib/websocket.js:829
websocket.readyState = WebSocket.CLOSING;
^
TypeError: Cannot set property 'readyState' of undefined
at Socket.socketOnClose (/home/pi/data_collector_access_point/node_modules/ws/lib/websocket.js:829:24)
at emitOne (events.js:121:20)
at Socket.emit (events.js:211:7)
at TCP._handle.close [as _onclose] (net.js:567:12)
Server.js
serverWS(address){
let server = address
const WebSocket = require('ws');
let wss = new WebSocket.Server({ server });
var socketstate = 1;
wss.on('connection', function connection(ws, req) {
ws.on('message', function incoming(message) {
});
ws.onclose = function(){
socketstate = 0
console.log('ws.onclose()')
setTimeout(function(){
console.log('reconnect ', server)
let fti_server = new create_server()
fti_server.serverWS(server)
},3000);
};
ws.onerror = function(err) {
console.error('Socket encountered error: ', err.message, 'Closing socket');
ws.close();
};
function sendme(data){
switch(socketstate){
case 0:
console.log('sendme() socket state', socketstate)
break;
case 1:
console.log('sendme() socket state', socketstate)
if(ws.readyState == 1){
updater((data)=>{
ws.send(data)
})
}
break;
}
}
function reOpen(){
console.log('reopen', wss)
wss = new WebSocket.Server({ server });
}
setInterval(()=>{
switch(ws.readyState){
case 1 :
console.log('switch case ws state on')
sendme(data)
break;
case 2 :
console.log('ws state 2')
break
case 3 :
console.log('switch case ws state not on')
break;
}
},1000);
})
}
I am watching the WebSocket readyState to see if the websocket is open, closed, closing, etc. Once it is open I continue sending the index data.
I am referencing this reconnection situation.
A: For those interested, I got it working by allowing the browser to re-establish the connection without calling the websocket setup function again. I modified onclose() and it works!
ws.onclose = function(){
socketstate = 0
ws.close()
};
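As a side note, if a client does reconnect from its onclose handler, a capped exponential backoff avoids hammering the server with retries. A small sketch of the delay schedule; the function name and default values are my own, not from the code above:

```javascript
// Delay before reconnect attempt N: 1s, 2s, 4s, ... capped at 30s.
function reconnectDelay(attempt, baseMs = 1000, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```

The setTimeout in an onclose handler would then wait reconnectDelay(attempt) milliseconds, with attempt reset to 0 once a connection succeeds.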
| |
doc_23537668 | I am referring to this documentation link for usage instructions.
Request URL: GET https://outlook.office.com/api/v2.0/me/Events/$count
Response: -1
To verify whether the above-mentioned response is legitimate, I tried to get all events with a skip filter to identify the actual number of records present.
After a number of attempts, the following request URL gave me the final count:
Request URL: GET https://outlook.office.com/api/v2.0/me/events/?$skip=159
Response:
{
"@odata.context": "https://outlook.office.com/api/v2.0/$metadata#Me/Events",
"value": [
{
"Id": "AAMkADMzYzIxNTBjLWUyMWUtNDgzYi04NTEwLTc5YTkzMWI5MmE4MgBGAAAAAABjOnbtK9ZkTIjwInd5bfYcBwDe_ScfFfAUQaKHRwnzV1lBAAAAAAENAADe_ScfFfAUQaKHRwnzV1lBAACs2ojfAAA=",
"CreatedDateTime": "2016-11-28T11:09:03.8359488Z",
"LastModifiedDateTime": "2017-02-21T08:04:48.8262751Z"
}
]
}
This implies that after skipping 159 records there is a 160th record present in the authenticated account, but the $count-filtered API doesn't give me a valid response.
I tried testing this scenario with two different accounts, where I noticed the same anomaly for the /messages API as well. An HTTP GET call to messages/$count gives me 1563, whereas after trying with the skip filter I found the total count to be 1396.
I want to know whether $count returns a legitimate response. If yes, please explain this anomaly; if no, is there any pattern for when the response should be expected to be inconsistent?
A: To get a count of the number of events, you need to specify start time and end time. Here is what I use:
https://outlook.office.com/api/v2.0/me/calendarview/$count?startDateTime=2017-05-01T00:00:00Z&endDateTime=2017-05-31T23:59:59Z
If you don't specify the dates, you will get 400 with the following error message:
{"error":{"code":"ErrorInvalidParameter","message":"This request requires a time window specified by the query string parameters StartDateTime and EndDateTime."}}
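If you assemble that URL programmatically, letting a library encode the query string avoids escaping mistakes with the colons in the timestamps. A small sketch; the date window is the same hypothetical one used above:

```python
from urllib.parse import urlencode

base = "https://outlook.office.com/api/v2.0/me/calendarview/$count"
params = {
    "startDateTime": "2017-05-01T00:00:00Z",
    "endDateTime": "2017-05-31T23:59:59Z",
}
url = base + "?" + urlencode(params)  # colons are percent-encoded as %3A
```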
| |
doc_23537669 | public function setCookies($value){
$minutes = 60;
$response = new Response('Hello World');
$response->withCookie(cookie('name', $value, $minutes));
return $response;
}
where $value is the string value of the cookie, and I am trying to get the cookie with this method:
public function getCookies(Request $request) {
$value = $request->cookie('name');
return $value;
}
but the return value is always null. Please let me know where I am going wrong.
Here are my routes:
Route::get('/cookie/set','App\Http\Controllers\Cont@setCookies');
Route::get('/cookie/get','App\Http\Controllers\cont@getCookies');
A: You have to change the code to this:
public function setCookie(Request $request){
$minutes = 60;
$response = new Response('Set Cookie');
$response->withCookie(cookie('name', 'MyValue', $minutes));
return $response;
}
A: You need to change the route to accept a dynamic value.
If you're sending values from the URL, then set the route to:
Route::get('/cookie/set/{value}','App\Http\Controllers\Cont@setCookies');
| |
doc_23537670 | The query and foreach are fine and they work; I use that exact setup in a command. It's just not working for the embed.
const con = mysql.createConnection({
host: "localhost",
user: "root",
password: "",
database: "testbot"
});
con.connect(err => {
if(err) throw err;
console.log("Connected to database!");
});
function statusUpdate() {
var update = bot.channels.get('5777623821454355545');
const statusEmbed = new Discord.RichEmbed();
statusEmbed.setTitle("**Current Statuss:**");
con.query("SELECT * FROM games", function(err, result, fields) {
if(err) throw err;
Object.keys(result).forEach(function(key) {
var row = result[key];
statusEmbed.addField('**' + row.name + '**' + ' - ' + '(' + row.description + ')' + ' - ' + '**' + row.status + '**');
});
});
update.send(statusEmbed);
}
bot.on('ready', () => {
console.log('This bot is online!');
statusUpdate();
});
A: You have to update and send statusEmbed inside the query callback, because otherwise you will send it before addField has been performed.
The callback means that the query is asynchronous.
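The ordering problem can be seen without MySQL or Discord at all. In this stand-in (fakeQuery is hypothetical), code placed after the call runs before the callback does, which is exactly why the original update.send fired on an empty embed:

```javascript
const events = [];

// Stand-in for con.query: it schedules the callback for a later
// turn of the event loop instead of running it immediately.
function fakeQuery(sql, callback) {
  setImmediate(() => {
    events.push('callback ran');
    callback(null, []);
  });
}

fakeQuery('SELECT * FROM games', () => {});
events.push('line after fakeQuery ran');
// Once the event loop turns over, `events` is
// ['line after fakeQuery ran', 'callback ran'].
```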
Callback-based solution
function statusUpdate(callback) {
const update = bot.channels.get('577762382164525066');
const statusEmbed = new Discord.RichEmbed();
statusEmbed.setTitle('**Current Statuss:**');
con.query('SELECT * FROM games', function(err, result, fields) {
if (err) {
callback(err);
return;
}
Object.keys(result).forEach((key) => {
const row = result[key];
statusEmbed.addField(row.name, `(${row.description}) - **${row.status}**`); // addField takes a name and a value
});
update.send(statusEmbed);
callback(false);
});
}
Alternative
function statusUpdate() {
const update = bot.channels.get('577762382164525066');
const statusEmbed = new Discord.RichEmbed();
statusEmbed.setTitle('**Current Statuss:**');
con.query('SELECT * FROM games', function(err, result, fields) {
if (err) {
throw err;
}
Object.keys(result).forEach((key) => {
const row = result[key];
statusEmbed.addField(row.name, `(${row.description}) - **${row.status}**`); // addField takes a name and a value
});
update.send(statusEmbed);
});
}
| |
doc_23537671 | I have hortonworks distribution installed on my machine.
I am following the steps in the article below.
http://hortonworks.com/hadoop-tutorial/hello-world-an-introduction-to-hadoop-hcatalog-hive-and-pig/#section_5
File does not exist: /user/admin/pig/jobs/explain_p1_28-03-2016-00-58-46/stderr
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1828)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1712)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:652)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
| |
doc_23537672 | My html is:
<div class="product-box" >
<div class="flex-container">
<div class="flex-child">
<input class="qty-box">
</div>
<div class="flex-child">
<button type="button" class="qty-add-sub">
<span class="qty-add">+</span>
</button>
</div>
<div class="flex-child">
<button type="button" class="qty-add-sub">
<span class="qty-minus">-</span>
</button>
</div>
</div>
</div>
My css is:
.flex-container {
display: flex;
max-width: 180px;
}
.flex-child {
flex: 1;
}
.product-box {
display: flex;
flex-direction: column;
font-weight: 300;
width: 200px;
padding: 10px;
text-align: center;
background-color: #c5c5c5;
border-radius: 5px;
}
input.qty-box {
border: 2px solid rgb(179, 179, 179);
font-family: Arial, Helvetica, sans-serif;
border-radius: 5px;
font-weight: bold;
font-size: 22px;
height: 55px;
}
.qty-add {
color:rgb(84, 0, 0);
font-size: 30px;
}
.qty-minus {
color:rgb(84, 0, 0);
font-size: 30px;
}
The display appears like so:
Sandbox URL:
https://codesandbox.io/embed/html-css-forked-eu1680?fontsize=14&hidenavigation=1&theme=dark
A: The input's default size is 20 characters, and there is not enough space for that (see the W3Schools reference for the input element).
If you want to resize the input to fewer characters you can use:
<input size="number">
A: So I've had a good look at this and there are a few things to note:
Setting the width of the input box to 100% works, but there's still an element that pokes out of the end; this is due to the box-sizing being content-box and not border-box. I've set the flexbox so that the first child grows and the buttons don't, and I've also set a width for your buttons. Example below:
* {
box-sizing: border-box;
}
.flex-container {
display: flex;
gap:0.125rem;
}
.flex-child {
flex-grow: 0;
}
.flex-child:first-child {
flex-grow: 1;
}
.product-box {
font-weight: 300;
width: 200px;
padding: 10px;
background-color: #c5c5c5;
border-radius: 5px;
}
input.qty-box {
border: 2px solid rgb(179, 179, 179);
font-family: Arial, Helvetica, sans-serif;
border-radius: 5px;
font-weight: bold;
font-size: 22px;
height: 55px;
width: 100%;
}
.qty-add {
color: rgb(84, 0, 0);
font-size: 30px;
}
.qty-minus {
color: rgb(84, 0, 0);
font-size: 30px;
}
.qty-add-sub {
width: 2rem;
}
<div class="product-box">
<div class="flex-container">
<div class="flex-child">
<input class="qty-box" />
</div>
<div class="flex-child">
<button type="button" class="qty-add-sub">
<span class="qty-add">+</span>
</button>
</div>
<div class="flex-child">
<button type="button" class="qty-add-sub">
<span class="qty-minus">-</span>
</button>
</div>
</div>
</div>
| |
doc_23537673 | I'm trying to set a per-second rate limit for each API key with the following configuration, and I did the load testing with 400 parallel requests and 125 iterations in JMeter.
The output was not as per the rate limit set in the configuration.
Note: I have set a per-minute rate limit for each API key, similar to the following configuration (e.g. "rate=3r/m"), and that worked.
But with the same configuration, why does it not work for requests per second? That is my worry.
Can someone help me find out why the per-second rate limit is not working?
Thanks in advance
# Rate Limiter By Input API Key Header
limit_req_zone $http_api_key zone=api_key_header_rate_limit:10m rate=400r/s;
# API client Rate limiter - START
map $http_api_key $r1 {
key1 "1";
default "";
}
limit_req_zone $r1 zone=r1:10m rate=40r/s;
map $http_api_key $r2 {
key2 "1";
default "";
}
limit_req_zone $r2 zone=r2:10m rate=50r/s;
map $http_api_key $r3 {
key3 "1";
default "";
}
limit_req_zone $r3 zone=r3:20m rate=30r/s;
# API client Rate limiter - END
server {
listen 443 ssl;
# listen [::]:443 ssl http2;
server_name example.com;
root /var/www/html;
# Brotli Settings
brotli on;
brotli_comp_level 5;
brotli_buffers 32 8k;
brotli_min_length 100;
brotli_static on;
brotli_types image/jpeg image/bmp image/svg+xml text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon;
# Rate Limiter Zone
# limit_req zone=r1 burst=39 nodelay;
# limit_req zone=r2 burst=49 nodelay;
limit_req zone=r3 burst=29 nodelay;
limit_req zone=api_key_header_rate_limit burst=399 nodelay;
limit_req_status 429;
# SSL
# listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# ssl_protocols TLSv1.2;
# ssl_session_timeout 4h;
# ssl_handshake_timeout 30s;
ssl_client_certificate /etc/nginx/ssl/certificate/api_ca.crt;
ssl_verify_client optional; # Set to "on" if you only allow authenticated requests
location ~ ^/(status|ping)$ {
allow 127.0.0.1;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_index index.php;
include fastcgi_params;
#fastcgi_pass 127.0.0.1:9000;
fastcgi_pass unix:/run/php/php7.4-fpm.sock;
}
location ^~ /main {
alias /var/www/html/main/public;
try_files $uri $uri/ @main_laravel;
index index.php;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/run/php/php7.4-fpm.sock;
include fastcgi_params;
fastcgi_read_timeout 600;
fastcgi_param SCRIPT_FILENAME /var/www/html/main/public/index.php;
fastcgi_param SSL_CLIENT_VERIFY $ssl_client_verify;
fastcgi_param SSL_CLIENT_S_DN $ssl_client_s_dn;
}
}
location @main_laravel {
rewrite /main/(.*)$ /main/index.php?/$1 last;
}
location / {
# index index.html;
try_files $uri$args $uri$args/ /index.html;
}
}
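One thing worth double-checking, offered as a hypothesis rather than a confirmed diagnosis: nginx enforces per-second rates at millisecond granularity, so rate=30r/s is treated as roughly one request every 33 ms, and at any instant only up to burst extra requests are admitted (with nodelay) before the rest are rejected with 429. A per-minute rate spreads the same budget over a much longer window, which may be why 3r/m appeared to behave as expected under the same spike. If bursts of hundreds of requests are expected, the burst sizes above may simply be too small, along the lines of:

```nginx
# Hypothetical adjustment: absorb a one-off spike of up to 400 requests
limit_req zone=r3 burst=400 nodelay;
```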
| |
doc_23537674 | Thanks!
A: I don't know how to help you with vpasolve, but you can try the fsolve or fzero functions. It is possible to pass tolerance preferences via the optimoptions function there.
http://www.mathworks.com/help/optim/ug/fzero.html
http://www.mathworks.com/help/optim/ug/optimoptions.html
Cheers.
| |
doc_23537675 |
Here is some code to generate data in this form:
start_interval <- seq(0, 13)
end_interval <- seq(1, 14)
living_at_start <- round(seq(1000, 0, length.out = 14))
dead_in_interval <- c(abs(diff(living_at_start)), 0)
df <- data.frame(start_interval, end_interval, living_at_start, dead_in_interval)
From my use of the survival package so far, it seems to expect each individual to be one survival time, but I might be misreading the documentation of the Surv function. If survival will not work, what other packages are out there for this type of data?
If there is not a package or function to easily estimate the survival function, I can easily calculate the survival times myself with the following equation.
A: Since the survival package needs one observation per survival time, we need to do some transformations, using the simulated data.
Simulated Data:
library(survival)
start_interval <- seq(0, 13)
end_interval <- seq(1, 14)
living_at_start <- round(seq(1000, 0, length.out = 14))
dead_in_interval <- c(abs(diff(living_at_start)), 0)
df <- data.frame(start_interval, end_interval, living_at_start, dead_in_interval)
Transforming the data by duplicating each row by the number dead:
duptimes <- df$dead_in_interval
rid <- rep(1:nrow(df), duptimes)
df.t <- df[rid,]
Using the Surv Function
test <- Surv(time = df.t$start_interval,
time2 = df.t$end_interval,
event = rep(1, nrow(df.t)), #Every Observation is a death
type = "interval")
Fitting the survival curve
summary(survfit(test ~ 1))
Comparing with by hand calculation from original data
df$living_at_start/max(df$living_at_start)
They match.
Questions
When using the survfit function, why is the number at risk 1001 at time 0 when there are only 1000 people in the data?
length(test)
| |
doc_23537676 | What I want
This is my current JSFiddle showing what I've "accomplished"
I am fairly new to ASP.NET and programming in general, so excuse my poor CSS.
Thanks for any help that anyone can offer.
The HTML:
<div class="bigGreenButton"> <a href="/Liquor/specialorder/supplier-info">Submit a special order request <br />
for information ➧
</a> </div>
The CSS:
.bigGreenButton a{
font-family:'TradeGothic LT CondEighteen';
font-size:18px;
background-color:#60a74a;
color:white;
font-weight:bold;
padding-bottom:10px;
padding-top:10px;
padding-left:25px;
padding-right:25px;
text-transform:uppercase;
text-decoration:none;
height:auto;
width:auto;
text-align:center;
}
.bigGreenButton a:hover {
background-color:#cccccc;
}
button {
text-align: center;
padding: 0px 0px 0px 0px;
border: none;
}
A: Add this to your css:
.bigGreenButton a{
display: inline-block;
...
}
You can see it here.
A: Change the display, since there isn't a block-level element inside your link, and set the width how you want it:
.bigGreenButton a{
...
display: block;
width: 400px;
}
Shown here
| |
doc_23537677 | I'm running a fairly low-intensity test (10 concurrent users, 6 requests per user, throughput capped at 10) and I'm encountering strange values in the CSV which break the plugins that generate graphs.
As you can see, the "success" column contains a strange "text" value, and all the following values are shifted by one. The plugins throw an exception since the "bytes" value is empty.
In the last run (10h30m long) there were 6 occurrences of these lines, in a CSV with about 330k lines.
I noticed that all the occurrences seem to be duplicates of the previous request - same endpoint and elapsed values. The hostname is "0" for some reason...
Suspecting some CSV delimiter issue caused by threadName values, I tried removing the threadName column altogether from the Results Tree Config, but the new result is even stranger:
The threadName value has appeared from nowhere, in addition to "text".
Interestingly, Taurus' kpi.log doesn't contain these silly rows at all.
For instance, in relation to the second picture, the corresponding kpi contains:
1529955107932,9,Add quote to basket,200,OK,Create Enquiry through to select quote 1-2,true,162,10,10,9,e4ad73a192d4,0
1529955107932,10,Add quote to basket,200,OK,Create Enquiry through to select quote 1-5,true,162,10,10,10,e4ad73a192d4,0
1529955107932,10,Create new basket,201,Created,Create Enquiry through to select quote 1-9,true,224,10,10,10,e4ad73a192d4,1
1529955108268,4,Get basket,200,OK,Create Enquiry through to select quote 1-1,true,839,10,10,4,e4ad73a192d4,0
1529955108268,4,Get basket,200,OK,Create Enquiry through to select quote 1-3,true,1092,10,10,4,e4ad73a192d4,0
1529955108269,4,Get basket,200,OK,Create Enquiry through to select quote 1-10,true,1086,10,10,4,e4ad73a192d4,0
I ran the same 10.5h test again with the changes:
* Re-enabled threadName
* Enabled dataType
The graphs were not cut this time, but some contained dodgy data points: threadCount was reported at very high values, between 100 and 297, on 3 occasions.
Again, I found 3 dodgy lines corresponding to those values; this time, though, the data format is correct, so the plugins do not explode due to empty values.
These data points screw up some of the graphs...
Does anybody have a clue about it?
Thx
| |
doc_23537678 | <v-ons-list-item class="swipeArea"
v-for="(list, index) in todoLists"
v-bind:key="list.todo"
v-if="!list.completed"
v-touch:swipe.left="deleteLisItem(index)"
v-touch:swipe.right="doneLisItem(index)"
v-touch:longtap="deleteOrDone(index)" tappable>
<label class="center">
<i class="zmdi zmdi-check" aria-hidden="true"></i> {{ list.todo }}
</label>
</v-ons-list-item>
And then when the user long-taps a list item there will be a popup with 3 buttons: Done, Delete and Cancel. When the user taps Done, the item will be marked as done; if they click Delete, the item will be deleted; and clicking Cancel will cancel the event. Here is the div:
<div v-if="doneOrDelete" id="doneOrDelete">
<div class="action-sheet-mask"></div>
<div class="action-sheet">
<button class="action-sheet-button" @click="doneLisItem">Done</button>
<button class="action-sheet-button action-sheet-button--destructive" @click="deleteLisItem">Delete</button>
<button class="action-sheet-button" @click="doneOrDelete=false">Cancel</button>
</div>
</div>
Now all I have to do is pass the index to the method. Can anyone help me pass the index? TIA
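One common pattern, sketched here in plain JavaScript (not Onsen-specific; names such as openSheet and selectedIndex are hypothetical), is to store the tapped index in component state when the long-tap fires, so the action-sheet buttons don't need the index passed to them at all:

```javascript
const state = {
  todoLists: [
    { todo: "buy milk", completed: false },
    { todo: "write code", completed: false },
  ],
  doneOrDelete: false,   // controls the action sheet's v-if
  selectedIndex: null,   // remembered at long-tap time
};

// Bound to the long-tap on a list item: remember which row was tapped.
function openSheet(index) {
  state.selectedIndex = index;
  state.doneOrDelete = true;
}

// Bound to the "Done" button: acts on the remembered index.
function doneListItem() {
  state.todoLists[state.selectedIndex].completed = true;
  state.doneOrDelete = false;
}

// Bound to the "Delete" button.
function deleteListItem() {
  state.todoLists.splice(state.selectedIndex, 1);
  state.doneOrDelete = false;
}
```

In the template, the long-tap handler would receive the index while the Done/Delete buttons call the zero-argument handlers.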
| |
doc_23537679 | I am currently using a foreach loop to iterate through each of the IDs fetched in a DataTable, then making a call to the REST API URL and finally dumping the results into multiple tables.
The code runs well at times but sometimes randomly throws the error below:
The remote server returned an error: (401) Unauthorized.
I have tried various things but have not been able to crack it; below is my code for reference.
foreach (DataRow drow in ds.Tables[0].Rows)
{
string uri2 = "http://myrestURL/Transaction/" + drow.ItemArray[0].ToString();
HttpWebRequest req2 = HttpWebRequest.CreateHttp(uri2);
req2.CookieContainer = new CookieContainer();
req2.Method = "GET";
req2.UseDefaultCredentials = true;
req2.ContentLength = 0;
req2.Accept = "application/xml,*/*";
req2.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
using (HttpWebResponse resp2 = (HttpWebResponse)req2.GetResponse())
{
StringReader theReader;
DataSet theDataSet;
using (StreamReader reader2 = new StreamReader(resp2.GetResponseStream()))
{
string message2 = reader2.ReadToEnd();
theReader = new StringReader(message2.ToString());
theDataSet = new DataSet();
theDataSet.ReadXml(theReader);
DataTable dt = theDataSet.Tables["table1"];
DataTable dt1 = theDataSet.Tables["table2"];
DataTable dt2 = theDataSet.Tables["table3"];
DataTable dt3 = theDataSet.Tables["table4"];
dt.Columns.Add("ID", typeof(string));
dt.Columns["ID"].SetOrdinal(0);
foreach (DataRow row in dt.Rows)
{
row["ID"] = drow.ItemArray[0].ToString();
}
dt1.Columns.Add("ID", typeof(string));
dt1.Columns["ID"].SetOrdinal(0);
foreach (DataRow row in dt1.Rows)
{
row["ID"] = drow.ItemArray[0].ToString();
}
dt2.Columns.Add("ID", typeof(string));
dt2.Columns["ID"].SetOrdinal(0);
foreach (DataRow row in dt2.Rows)
{
row["ID"] = drow.ItemArray[0].ToString();
}
dt3.Columns.Add("ID", typeof(string));
dt3.Columns["ID"].SetOrdinal(0);
foreach (DataRow row in dt3.Rows)
{
row["ID"] = drow.ItemArray[0].ToString();
}
SqlConnection insertConn = new SqlConnection(strConn);
insertConn.Open();
using (SqlBulkCopy bulkCopy1 = new SqlBulkCopy(insertConn))
{
bulkCopy1.DestinationTableName = dt1.TableName;
try
{
bulkCopy1.WriteToServer(dt1);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
}
...
}
}
}
| |
doc_23537680 | The print() gives 'from initstate propertyName = Instance of 'Future'' and not the actual value in Firestore. How do I extract the actual value?
I have also tried using a StreamBuilder, but I keep having the same issue.
@override
void initState() {
super.initState();
var propertyName = _getPropertyNameFromPropertyID(
widget.propertyID); // to get propertyName from Firestore
print('from initstate propertyName = $propertyName');
}
Future _getPropertyNameFromPropertyID(propertyID) async {
DocumentSnapshot snapshot = await Firestore.instance
.collection('properties')
.document(propertyID)
.get();
String result = snapshot['propertyName'].toString();
return result;
}
A: That's because your method returns a Future, so you will need to use async/await or just get the result directly from the Future.
Option 1
@override
void initState() {
super.initState();
_getPropertyNameFromPropertyID(widget.propertyID).then ((propertyName){
print('from initstate propertyName = $propertyName');
});
}
Option 2
@override
void initState() {
super.initState();
_loadAsyncData();
}
_loadAsyncData() async {
var propertyName = await _getPropertyNameFromPropertyID(
widget.propertyID); // to get propertyName from Firestore
print('from initstate propertyName = $propertyName');
}
Future _getPropertyNameFromPropertyID(propertyID) async {
DocumentSnapshot snapshot = await Firestore.instance
.collection('properties')
.document(propertyID)
.get();
String result = snapshot['propertyName'].toString();
return result;
}
| |
doc_23537681 | In my layout I have the following declaration:
<Button
android:id="@+id/dialog_new_database_button_cancel"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_weight="1"
android:text="@string/button_cancel"
android:maxLines="1"
style="?android:attr/buttonBarButtonStyle"
android:onClick="buttonCancel"
/>
Now my DialogFragment
import android.os.Bundle;
import android.support.v4.app.DialogFragment;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
public class DialogNewDatabase extends DialogFragment {
public DialogNewDatabase() {
// Empty constructor required for DialogFragment
super();
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
super.onCreateView (inflater, container, savedInstanceState);
View view = inflater.inflate(R.layout.dialog_new_database, container);
getDialog().setTitle("Hello");
return view;
}
@Override
public void onCreate(Bundle bundle) {
setCancelable(true);
setRetainInstance(true);
super.onCreate(bundle);
}
@Override
public void onDestroyView() {
if (getDialog() != null && getRetainInstance())
getDialog().setDismissMessage(null);
super.onDestroyView();
}
public void buttonCancel (View view) {
dismiss();
}
public void buttonOK (View view) {
}
}
I now when I try to click cancel button I get:
java.lang.IllegalStateException: Could not find a method buttonCancel(View) in the activity class android.view.ContextThemeWrapper for onClick handler on view class android.widget.Button with id 'dialog_new_database_button_cancel'
at android.view.View$1.onClick(View.java:3031)
at android.view.View.performClick(View.java:3511)
at android.view.View$PerformClick.run(View.java:14105)
at android.os.Handler.handleCallback(Handler.java:605)
at android.os.Handler.dispatchMessage(Handler.java:92)
at android.os.Looper.loop(Looper.java:137)
at android.app.ActivityThread.main(ActivityThread.java:4482)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:511)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:787)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:554)
at dalvik.system.NativeStart.main(Native Method)
Caused by: java.lang.NoSuchMethodException: buttonCancel [class android.view.View]
at java.lang.Class.getConstructorOrMethod(Class.java:460)
at java.lang.Class.getMethod(Class.java:915)
at android.view.View$1.onClick(View.java:3024)
... 11 more
Any idea? Is it perhaps somehow related to the fact that I use import android.support.v4.app.DialogFragment (support v4)? How can I solve this (I would still prefer to use android:onClick in the XML layout)?
A: I would try a different approach which works fine for me:
1. Implement OnClickListener in your fragment:
public class DialogNewDatabase extends DialogFragment implements OnClickListener
2. Have a button with a unique id in XML, which does NOT need android:clickable:
<Button android:id="@+id/dialog_new_database_button_cancel" />
3. Override the onClick() method within your fragment and insert a reaction to your click:
public void onClick(View v) {
switch (v.getId()) {
case R.id.dialog_new_database_button_cancel:
// your stuff here
this.dismiss();
break;
default:
break;
}
}
4. Import the necessary class:
import android.view.View.OnClickListener;
5. Set the OnClickListener on the button:
private Button bCancel = null;
bCancel = (Button) findViewById(R.id.dialog_new_database_button_cancel);
bCancel.setOnClickListener(this);
// It is possible that you might need the reference to the view;
// replace the 2nd line with (Button) getView().findViewById(...);
This way you can handle several clickable buttons in the same onClick method; you just need to add more cases with the proper ids of your clickable widgets.
A: I don't think that is related to the support fragment.
The issue seems to arise from the fact that you are registering an onClick in XML, which fires on the activity that the fragment was bound to at the time of the click.
As your buttonCancel method does not exist in the activity (because it is inside the fragment), it fails.
I don't think that is really a desirable solution, but you can define buttonCancel on your activity to make that error go away, and have the activity's buttonCancel simply call the method that exists in the fragment, in case you want to keep your action/view behaviour inside the fragment.
A: Try:
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
super.onCreateView (inflater, container, savedInstanceState);
View view = inflater.inflate(R.layout.dialog_new_database, container);
getDialog().setTitle("Hello");
return view;
}
private void buttonCancel (View view) {
dismiss();
}
private void buttonOK (View view) {
}
}
| |
doc_23537682 | I was trying to use fflush(stdin) to clear the keyboard buffer, this did not work either.
#include "pch.h"
#define _CRT_SECURE_NO_WARNINGS
#include <stdio.h>
#include <Windows.h>
void main() {
char v;
int exit=1;
while (exit == 1) {
v = 0;
//Read what type of calculation the user wants to do.
printf("Type (+,-,*,/): ");
fflush(stdin);
scanf("%c", &v);
//system("cls");
//show the user for 2 sec what he chose
printf("Type you chose: %c", v);
Sleep(2000);
//system("cls");
//here the calculation will take place.
switch (v) {
case '+':
printf("\nTBD +");
break;
//Here are some more cases that i have excluded.
default:
printf("Please only use '+,-,*,/' above\n");
exit = 1;
break;
}
printf("\n do you want to repeat (1==yes|0==no): ");
scanf_s("%d", &v);
}
}
The result when this program runs looks like this:
Type (+,-,*,/): +
Type you chose: +
TBD +
do you want to repeat (1==yes|0==no): 1
Type (+,-,*,/): Type you chose:
Please only use '+,-,*,/' above
do you want to repeat (1==yes|0==no):
The result should look something like this:
Type (+,-,*,/): +
Type you chose: +
TBD +
do you want to repeat (1==yes|0==no): 1
Type (+,-,*,/): +
Type you chose: +
TBD +
do you want to repeat (1==yes|0==no): 1
A: There are a lot of issues with your code. For one, you don't need the Windows header or scanf_s. Also, fflush(stdin) results in undefined behavior; you should clear the input stream yourself. As an alternative to scanf, use fgets or fgetc and perform the conversion yourself. Another issue is that you reset the value of v at the beginning of the loop, and you give exit a default value of 1 but check while (exit == 1). You also want to run the code at least once regardless of the initial condition, and a do-while loop is a better fit than while in that case. Also, just for naming convention, continuing the loop when exit == 1 is misleading: if exit == 1, the loop should terminate. Overall it's very convoluted and confusing code. Let me try to clean it up for you.
#include <stdio.h>
#include <stdlib.h> /* for atoi() */

int main() {
//A size of 3 would do for a single character (the character, the newline, the null terminator),
//but 32 leaves room for longer input at the repeat prompt
char v[32] = {0};
int exit = 0;
do{
//Read what type of calculation the user wants to do.
printf("Type (+,-,*,/): ");
fgets(v, sizeof(v), stdin);
//system("cls");
//show the user for 2 sec what he chose
printf("Type you chose: %c", *v);//dereference the pointer to the first character
//system("cls");
//here the calculation will take place.
switch (*v) {//dereference the pointer to the first character
case '+':
printf("\nTBD +");
break;
//Here are some more cases that i have excluded.
default:
printf("Please only use '+,-,*,/' above\n");
}
printf("\n do you want to exit (1==yes|0==no): ");
fgets(v, sizeof(v),stdin);
exit = atoi(v);
}while(exit != 1);
}
We give v a size of 32, although if we are only entering 1 character then a size of 3 is sufficient, because entering a single character with fgets will occupy three bytes: the character, the newline, and the terminating null. But since we are taking an integer value at the end of the loop, we want to make sure there's enough room in the buffer. If the user enters 123, for example, the buffer will still be fine and no extra bytes will remain in the stream.
A: To make your original code work
*
*remove the call to fflush(stdin) as fflush() is undefined for input streams.
*change scanf("%c", &v); to scanf(" %c", &v); (mind the space before the conversion specifier %c) to make scanf() skip leading whitespace.
*change scanf_s("%d", &v); to scanf_s("%d", &exit);. The compiler should have warned you about a type mismatch between conversion specifier %d and the argument &v (int* vs. char*). If it didn't you should increase the warning level of your compiler.
Possible implementation with error checking on input using scanf():
#include <stdio.h>
int main(void)
{
char keep_running;
do {
double first_operand;
while (printf("First operand: "), scanf("%lf%*[^\n]", &first_operand) != 1)
fputs("Input error. :(\n\n", stderr);
char op; // operator (not named "operator" in case a C++-compiler ever sees this file)
while (printf("Operation: "),
scanf(" %c%*[^\n]", &op) != 1 || (op != '+' && op != '-' && op != '*' && op != '/'))
{
fputs("Input error. :(\n\n", stderr);
}
double second_operand;
while (printf("Second operand: "), scanf("%lf%*[^\n]", &second_operand) != 1)
fputs("Input error. :(\n\n", stderr);
switch (op) {
case '+':
printf("\n%f %c %f = %f\n\n", first_operand, op, second_operand, first_operand + second_operand);
break;
case '-':
printf("\n%f %c %f = %f\n\n", first_operand, op, second_operand, first_operand - second_operand);
break;
case '*':
printf("\n%f %c %f = %f\n\n", first_operand, op, second_operand, first_operand * second_operand);
break;
case '/':
if(second_operand)
printf("\n%f %c %f = %f\n\n", first_operand, op, second_operand, first_operand / second_operand);
else fputs("\nDivision by zero is undefined. :(\n\n", stderr);
break;
}
while (printf("Do you want to repeat (y/n)? "),
scanf(" %c%*[^\n]", &keep_running) != 1 || (keep_running != 'n' && keep_running != 'y'))
{
fputs("Input error. :(\n\n", stderr);
}
puts("\n");
} while (keep_running == 'y');
}
*
*Please mind that the parameter list of functions that don't take any arguments should be void in C, hence int main(void).
*scanf() returns the number of successful assignments. Check that return value and handle input errors. Never trust the user.
*The conversion specifier %*[^\n] consumes all characters until a newline character is found and discards them. That way no garbage is left in the input buffer after scanf(). Note that this will consider successful conversions followed by garbage valid input. If you'd want to treat that as an input error you'd have to use more sophisticated methods.
| |
doc_23537683 | Thanks,
| |
doc_23537684 |
File naming uses the following convention:
{Path Prefix Pattern}/schemaHashcode_Guid_Number.extension
Example output files:
*
*Myoutput/20170901/00/45434_gguid_1.csv
*Myoutput/20170901/01/45434_gguid_1.csv
However, the following referenced variables do not appear to be explained in the documentation:
*
*schemaHashcode
*Guid
*Number
What do these variables refer to, and when can they change?
A: The GUID refers to the internal writer's uid. This is unique for each writer that gets created to write to the blob file. New writers are created based on partition and in the event of exceptions when the writer crashes. SchemaHashcode's value changes when a new schema in the incoming stream is observed. Hence you notice new files when the schema changes. Number refers to the index of the Blob block counter.
| |
doc_23537685 | As far as I understand the examples I've seen so far, the ProductCode denotes a specific version, so increasing the version also should change the product code. (Indeed the example above uses Product Id='*').
To understand this better, I am asking myself whether there is any scenario that would keep the ProductCode the same but increase the Version? What would Windows Installer do with such an MSI, given a previous one with a different ProductCode (but same UpgradeCode) were installed?
I guess another variation on my confusion would be: if I only want to do "major upgrades", does Id='*' make sense or will I have to control the ProductCode somehow?
A: If you were to rebuild your MSI file with updated files and increment the ProductVersion, then you have a minor update that you could install with a command line of REINSTALL=ALL REINSTALLMODE=vomus (typically) that would update the existing installed product. This is rare, IMO.
If you didn't use that command line you'd get that "another version of this product is already installed" message (if the package code was new for the new MSI, as it should be).
If you only do major upgrades then yes you need a new ProductCode every time, and increment the ProductVersion somewhere in the first 3 fields.
A: IMO:
1) The MSI SDK doco is poorly written. It discusses the topic in a roundabout way without actually explaining your options.
2) The vast majority of MSI developers should use Major Upgrades which in WiX means Id="*", bump one of the first 3 fields in ProductVersion and author a MajorUpgrade element.
3) Minor upgrades are much more stringent and error prone. You should be an expert in MSI and understand it and the component rules very well before deciding it's time for a Minor Upgrade. In other words, you'll know when it's time.
FWIW, when doing Major Upgrades "UpgradeCode" acts more like a ProductCode in that it's static. Think of UpgradeCode as a series of products; your ProductCode is always changing, not because it's a new product per se but because MSI says you must change it to do a major upgrade.
Software gets refactored so much from build to build these days with so little functionality change that the whole description of major, minor and "small" (always disliked that one... who releases a product without changing the version number???) is pointless.
| |
doc_23537686 |
*
*Via the shell.
*http://gearman.org/
I am wondering which solution is the best.
Also, I have seen that in PHP.ini there is a memory limitation. I am wondering how this limitation will affect my "background" PHP script, and if I need more memory which solution is the best.
More details:
The script that will be working in the background will encrypt a file with the help of PHP and Kohana framework.
I am using Ubuntu 11.
A: Obviously, it depends on whether you need Gearman's features or not. It's doubtful there is some magic in Gearman that would make your actual script work better, so unless you need to do work in parallel, to load balance processing, and to call functions between languages, a simple shell_exec('... &') (or a cron job) is simply less work.
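The shell_exec('... &') route amounts to launching a detached child process that does its work outside the request. A rough equivalent sketched in Python (the child command here is a trivial placeholder standing in for the real encryption script):

```python
import subprocess
import sys

# Launch a child process, the equivalent of shell_exec's trailing "&".
# sys.executable -c "pass" is a placeholder for the actual work.
child = subprocess.Popen([sys.executable, "-c", "pass"])

# The parent could continue immediately; here we wait only to show completion.
exit_code = child.wait()
print(exit_code)  # 0
```

A background process started this way runs under the web server's PHP CLI limits rather than the request's, which is also where a separate php.ini memory_limit can apply.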
| |
doc_23537687 | Questions
*
*What is the entry point to validate the licence?
*What is the entry point for an ASP.NET Area, to validate the licence for a specific area?
| |
doc_23537688 | But is there a shortcut to filter only a set of uids to authenticate.
A: The LDAP search filter you could use is:
(|(uid=a)(uid=b)(uid=c)(uid=...))
But as noted in the comments, a group is much easier and more maintainable.
However if you cannot use a group, consider using an attribute of the users, like description, resulting in this filter:
(description=mediawiki)
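If the list of uids lives in code, the OR filter can be generated rather than hand-written. A small sketch (plain string building, not a real LDAP client; note that special characters in values would additionally need RFC 4515 escaping, which is omitted here):

```python
def build_uid_filter(uids):
    """Build an LDAP OR filter like (|(uid=a)(uid=b)(uid=c))."""
    clauses = "".join(f"(uid={uid})" for uid in uids)
    # A single uid needs no OR wrapper
    return clauses if len(uids) == 1 else f"(|{clauses})"

print(build_uid_filter(["a", "b", "c"]))  # (|(uid=a)(uid=b)(uid=c))
```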
| |
doc_23537689 | When I do something like
<div style="|"></div>
and hit ctrl+space, normally the content assist for the style attribute appears. This works fine for *.html and *.shtml files.
But as soon as I rename the file to phtml, the content assist just fails and says no completion available.
Anybody got the same trouble and solved it?
I don't really know if this is an Eclipse or PDT or whatever problem, but it's really annoying.
A: I faced the same problem when I used a Galileo workspace with Helios.
I solved the PDT autocomplete issue by deleting the following file:
<your workspace name>/.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.dltk.ui.prefs
And restart helios.
I found the solution on the zend forum.
A: It's a bug in current PDT Release in Helios.
| |
doc_23537690 | I want to hide '.view-row' but now I'm hiding wrapper section mentioned in code.
(function($) {
function perspective_type() {
$(".perspective-list a").click(function(e) {
e.preventDefault();
$(".perspective-list a").parent().removeClass('active');
$('.wrapper .page-perspective').slice(0,3).show();
var href = $(this).attr('href');
$('.wrapper > :not(.' + href + ')').hide();
$('.wrapper > .' + href + '').slice(0,3).show();
$(this).parent().addClass('active');
});
$(".perspective-list a").mouseover(
function() {
$(".perspective-list a").removeClass('hover');
$(this).parent().addClass('hover');
});
$(".perspective-list a").mouseout(
function() {
$(".perspective-list a").each(function() {
$(this).parent().removeClass('hover');
});
});
$('#perspectives .perspectiveReadurl', '#page_perspectives .perspectiveReadurl').find('a').attr('target', '_blank');
}
jQuery(document).ready(function($) {
$('.Whitepapers').slice(0,4).show();
perspective_type();
});
})(jQuery)
.views-row{
height:50px;
border:1px solid red;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="page_perspectives">
<div class="view view-page-perspectives view-id-page_perspectives">
<div class="perspective-list">
<ul class="nav nav-justified">
<li class="">
<a class="Blogs" href="Blogs">Blogs</a>
</li>
<li>
<a class="Case_Studies" href="Case_Studies">Case Studies</a>
</li>
<li class="active">
<a class="Whitepapers" href="Whitepapers">Whitepapers</a>
</li>
</ul>
</div>
<div class="view-content">
<div class="views-row views-row-1 views-row-odd views-row-first">
<div class="wrapper">
<div class="page-perspective row Whitepapers" style="display: none;">
Whitepaper 1
</div>
</div>
</div>
<div class="views-row views-row-2 views-row-even">
<div class="wrapper">
<div class="page-perspective row Blogs" style="display: none;">
Blogs 1
</div>
</div>
</div>
<div class="views-row views-row-3 views-row-odd">
<div class="wrapper">
<div class="page-perspective row Whitepapers" style="display: none;">
Whitepaper 2
</div>
</div>
</div>
<div class="views-row views-row-4 views-row-even">
<div class="wrapper">
<div class="page-perspective row Case_Studies" style="display: none;">
Case study 1
</div>
</div>
</div>
<div class="views-row views-row-5 views-row-odd">
<div class="wrapper">
<div class="page-perspective row Blogs" style="display: none;">
Blogs 2
</div>
</div>
</div>
<div class="views-row views-row-6 views-row-even">
<div class="wrapper">
<div class="page-perspective row Whitepapers" style="display: none;">
Whitepaper 3
</div>
</div>
</div>
<div class="views-row views-row-7 views-row-odd views-row-last">
<div class="wrapper">
<div class="page-perspective row Whitepapers" style="display: none;">
Whitepaper 4
</div>
</div>
</div>
</div>
</div>
</div>
A: As you mentioned, all the markup is generated dynamically and you have a fixed height on the views-row section, so you want to hide the views-row element instead of the inner element. For this you have to do the following.
*
*When the page loads, remove the default style attribute from all the inner elements.
*Now all the elements become visible, so hide all the views-row elements.
*As per your requirement, slice the Whitepapers sections, find their respective views-row grandparents, and show those.
*After clicking each link, find the grandparent and show/hide that element.
I have implemented the above steps in the following Snippet.
(function($) {
function perspective_type() {
$(".perspective-list a").click(function(e) {
e.preventDefault();
$(".perspective-list a").parent().removeClass('active');
//$('.wrapper .page-perspective').slice(0,3).show(); /*Not sure what this line is doing. no need of this.*/
var href = $(this).attr('href');
$('.wrapper > :not(.' + href + ')').parent().parent().hide();
$('.wrapper > .' + href + '').slice(0,3).parent().parent().show();
$(this).parent().addClass('active');
});
$(".perspective-list a").mouseover(
function() {
$(".perspective-list a").removeClass('hover');
$(this).parent().addClass('hover');
});
$(".perspective-list a").mouseout(
function() {
$(".perspective-list a").each(function() {
$(this).parent().removeClass('hover');
});
});
$('#perspectives .perspectiveReadurl', '#page_perspectives .perspectiveReadurl').find('a').attr('target', '_blank');
}
jQuery(document).ready(function($) {
$('.wrapper .page-perspective').removeAttr('style');
$('.views-row').hide();
$('.Whitepapers').slice(0,4).parent().parent().show();
perspective_type();
});
})(jQuery)
.views-row{
height:50px;
border:1px solid red;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="page_perspectives">
<div class="view view-page-perspectives view-id-page_perspectives">
<div class="perspective-list">
<ul class="nav nav-justified">
<li class="">
<a class="Blogs" href="Blogs">Blogs</a>
</li>
<li>
<a class="Case_Studies" href="Case_Studies">Case Studies</a>
</li>
<li class="active">
<a class="Whitepapers" href="Whitepapers">Whitepapers</a>
</li>
</ul>
</div>
<div class="view-content">
<div class="views-row views-row-1 views-row-odd views-row-first">
<div class="wrapper">
<div class="page-perspective row Whitepapers" style="display: none;">
Whitepaper 1
</div>
</div>
</div>
<div class="views-row views-row-2 views-row-even">
<div class="wrapper">
<div class="page-perspective row Blogs" style="display: none;">
Blogs 1
</div>
</div>
</div>
<div class="views-row views-row-3 views-row-odd">
<div class="wrapper">
<div class="page-perspective row Whitepapers" style="display: none;">
Whitepaper 2
</div>
</div>
</div>
<div class="views-row views-row-4 views-row-even">
<div class="wrapper">
<div class="page-perspective row Case_Studies" style="display: none;">
Case study 1
</div>
</div>
</div>
<div class="views-row views-row-5 views-row-odd">
<div class="wrapper">
<div class="page-perspective row Blogs" style="display: none;">
Blogs 2
</div>
</div>
</div>
<div class="views-row views-row-6 views-row-even">
<div class="wrapper">
<div class="page-perspective row Whitepapers" style="display: none;">
Whitepaper 3
</div>
</div>
</div>
<div class="views-row views-row-7 views-row-odd views-row-last">
<div class="wrapper">
<div class="page-perspective row Whitepapers" style="display: none;">
Whitepaper 4
</div>
</div>
</div>
</div>
</div>
</div>
Here is the fiddle version.
| |
doc_23537691 | I added _vti_cnf [folder] to the Ignores List (and disabled the Config File so it didn't overwrite /overule the Tortoise settings)
But files in that folder still shows up in the list.
What's going on here?
A: It looks like you already have added them to the repository, so that they now shows as missing. Select the files in the working copy, do a right click and select TSVN->delete. Now commit this change. Now all new files in the folder are ignored.
| |
doc_23537692 | I've tried it with different mime types as well, but doesn't work. It does work however when I'm sending the input in JSON format instead of XML.
I'm using the following XML via postman
<Weather>
<City>London,uk</City>
<appid>b6907d289e10d714a6e88b30761fae22</appid>
<CIF>CIF20257</CIF>
</Weather>
And Configuration XML of my code in discussion is
<set-variable value="#[payload.Weather.City]" doc:name="Set Variable" doc:id="b98b3ec8-c1f7-436d-9bcf-49eb0ca8a033" variableName="test" mimeType="application/xml"/>
Error being displayed is
"javax.xml.stream.XMLStreamException - Trying to output non-whitespace characters outside main element tree (in prolog or
epilog), while writing Xml. Trace: at main (Unknown)" evaluating
expression: "payload.Weather.City".
A: There are two ways you can do a set Variable.
*
*using the set variable component
*is using a dataweave transformation.
I have used the second approach and I can see that I am able to set the variable.
here is the complete code for the small sample application :
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core" xmlns:http="http://www.mulesoft.org/schema/mule/http"
xmlns="http://www.mulesoft.org/schema/mule/core"
xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd">
<http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config" doc:id="23645d25-1194-4fcd-ae19-ffae9b9388f8" basePath="/play" >
<http:listener-connection host="localhost" port="8081" />
</http:listener-config>
<flow name="z_playFlow" doc:id="2ae13c16-4e1e-4203-96c3-9d372ce41c63" >
<http:listener doc:name="Listener" doc:id="9fa851c0-a05b-46e1-9ba4-f2433c80d67a" config-ref="HTTP_Listener_config" path="/setxml"/>
<set-payload value="<Weather>
<City>London,uk</City>
<appid>b6907d289e10d714a6e88b30761fae22</appid>
<CIF>CIF20257</CIF>
</Weather>" doc:name="Set Payload" doc:id="7d122f45-6025-4fb8-a7d4-e1ec0873f40b" mimeType="application/xml"/>
<ee:transform doc:name="Transform Message" doc:id="af6467e5-7177-403c-b9c0-62fb816b8f60" >
<ee:message >
</ee:message>
<ee:variables >
<ee:set-variable variableName="var" ><![CDATA[%dw 2.0
output application/xml
---
city: payload.Weather.City]]></ee:set-variable>
</ee:variables>
</ee:transform>
<logger level="INFO" doc:name="Logger" doc:id="8cbdcf0f-8b3e-4645-9475-887b9628bc05" message="#[payload]"/>
</flow>
</mule>
If you have questions on how to define a variable via the "transform message" component, let me know and I can demonstrate that to you.
defining a variable within transform message
*
*When you pull in a transform message component, the default output type is payload.
*Click on the edit current target (pen) option, which opens a selection dialog; under the output dropdown select Variable and supply a variable name.
A: I ran into this issue when moving from a Mule 3 mindset to Mule 4. The reason for your error is that the type of your variable is XML, but you are trying to write non-xml to it.
The output of payload.Weather.City is the string literal London,uk which isn't valid XML. There are a couple of options to resolve this.
*
*Output valid XML into the variable
<set-variable value="#[City: payload.Weather.City]" doc:name="Set Variable" variableName="test" />
This will set the value of the variable as <City>London,uk</City> which is valid XML
*Change the type of the variable
If you are just hoping to store the String London,uk for use later, then you can explicitly set the output type of the set variable component to java.
<set-variable value="#[output application/java --- payload.Weather.City]" doc:name="Set Variable" variableName="test" />
| |
doc_23537693 | The code seems to work, but my problem is performance. I'm seeing that the full batch of dml statements take about 1 second per statement to execute. I'm updating several thousand records, so this job will take quite awhile to execute. So, what I'm looking for is any other ideas on how I can do this while maximizing performance.
Here's what I'm doing right now.
for(Referrer_UpdateSet i : referrerUpdateSet)
{
String dmlStatement = "INSERT INTO TempRefURL (firstTouchDate) " +
"(SELECT activityDateTime as firstTouch "+
"FROM referrer_URL_backup_10292014 "+
"WHERE mktPersonId = ? "+
"ORDER BY activityDateTime ASC LIMIT 1)";
stmt = mktoUTMConn.prepareStatement(dmlStatement);
stmt.setInt(1, i.id);
//System.out.println(stmt+" \n");
stmt.executeUpdate();
}
mktoUTMConn.commit();
I'm also trying preparedStatements.addBatch, but it doesn't seem to be working (only 1 row inserted..)
System.out.println("updating temp table with referrer URL data");
//iterate through array of parsed referrer URLs
String dmlStatement = "UPDATE dml_sandbox.TempRefURL SET Referrer_URL = ? " + "WHERE id = ?";
for(Referrer_UpdateSet i : referrerUpdateSet){
stmt = mktoUTMConn.prepareStatement(dmlStatement);
stmt.setInt(2, i.id);
stmt.setString(1, i.cleanURL);
//System.out.println(stmt+" \n");
stmt.addBatch();
//stmt.executeUpdate();
//System.out.println(stmt+" \n");
}
stmt.executeBatch();
System.out.println("Done updating temp table with referrer URL data");
mktoUTMConn.commit();
Any suggestions would be greatly appreciated. Thanks!
A: Simple fix. See my comment above. Here's the new code:
String dmlStatement = "UPDATE dml_sandbox.TempRefURL SET Referrer_URL = ? " + "WHERE id = ?";
stmt = mktoUTMConn.prepareStatement(dmlStatement);
//iterate through array of parsed referrer URLs
for(Referrer_UpdateSet i : referrerUpdateSet){
stmt.setInt(2, i.id);
stmt.setString(1, i.cleanURL);
stmt.addBatch();
}
System.out.println(stmt+" \n");
int[] recordsAffected = stmt.executeBatch();
System.out.println("Done updating temp table with referrer URL data");
System.out.println(recordsAffected.length + " records affected");
mktoUTMConn.commit();
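The pattern in the fix — prepare the statement once outside the loop, queue the parameter sets, then send them as one batch — looks the same in any database API. A sketch using Python's sqlite3 executemany as a stand-in for JDBC's addBatch/executeBatch (table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TempRefURL (id INTEGER PRIMARY KEY, Referrer_URL TEXT)")
conn.executemany("INSERT INTO TempRefURL (id, Referrer_URL) VALUES (?, ?)",
                 [(1, None), (2, None), (3, None)])

# One statement prepared once, executed with many parameter sets in a batch
updates = [("http://a.example", 1), ("http://b.example", 2), ("http://c.example", 3)]
conn.executemany("UPDATE TempRefURL SET Referrer_URL = ? WHERE id = ?", updates)
conn.commit()

rows = conn.execute("SELECT Referrer_URL FROM TempRefURL ORDER BY id").fetchall()
print(rows)  # [('http://a.example',), ('http://b.example',), ('http://c.example',)]
```

The win comes from avoiding one round trip (and one statement preparation) per row.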
| |
doc_23537694 | /usr/local/lib/libpcl_search.so\n/usr/local/lib/libpcl_sample_consensus.so\n/usr/local/lib/libpcl_io.so\n/usr/local/lib/libpcl_segmentation.so\n/usr/local/lib/libpcl_common.so\n/usr/local/lib/libboost_random.so\n/usr/local/lib/libboost_math_tr1l.so
That was output when running find / -name "*so" command with QProcess printed like this:
qDebug() << m_process->readAllStandardOutput();
I guess this is an encoding issue..?
A: The problem is that QDebug shows the escaped newline and similar characters because you are passing it a QByteArray. If you want to see the raw output, use qPrintable:
#include <QCoreApplication>
#include <QProcess>
#include <QDebug>
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
QProcess process;
QObject::connect(&process, &QProcess::readyReadStandardOutput, [&process](){
qDebug()<< qPrintable(process.readAllStandardOutput());
});
process.start("find / -name \"*so\"");
return a.exec();
}
Output:
/snap/core/4917/lib/crda/libreg.so
/snap/core/4917/lib/i386-linux-gnu/ld-2.23.so
/snap/core/4917/lib/i386-linux-gnu/libBrokenLocale-2.23.so
/snap/core/4917/lib/i386-linux-gnu/libSegFault.so
/snap/core/4917/lib/i386-linux-gnu/libanl-2.23.so
/snap/core/4917/lib/i386-linux-gnu/libc-2.23.so
/snap/core/4917/lib/i386-linux-gnu/libcidn-2.23.so
...
| |
doc_23537695 | The typeface is saved to
Environment.SpecialFolder.LocalApplicationData
When i load it with Typeface.CreateFromFile(path) it doesn't show any exception or warning, but the label is just rendered with the default typeface.
Is it at all possible to load a .ttf file form outside the Assets folder?
A: Try this:
From Assets folder:
Typeface tf = Typeface.CreateFromAsset(Android.App.Application.Context.Assets, "sampleFontFamily.ttf");
Outside Assets folder, let's say from Resources (Resources->Font->myfont.ttf):
Typeface tf = ResourcesCompat.GetFont(Android.App.Application.Context, Resource.Font.myfont);
Apply this typeface object to your label.
A:
Environment.SpecialFolder.LocalApplicationData
You're saving your typeface file to internal storage (the files directory). The files directory is a private directory that is only accessible by your application. Neither the user nor the OS can access this file. You'll have to save the file in either Public External Storage or Private External Storage.
| |
doc_23537696 |
A: I don't know about a cancel payment call but you don't need it.
When the payment is approved by the sender you can call refund to revert it or execute to send money to secondary receiver(s). Before the sender approves the payment you don't need to cancel it just let it expire. You can control expiration by setting payKeyDuration on the pay call.
| |
doc_23537697 | When I only use an if loop, that portion seems to work. When I add the else, it ignores the if and just performs else.
Can I get some feedback on a better way to approach this?
public static void punchIn() throws IOException {
Scanner sc = new Scanner(System.in);
System.out.print("Enter Date and time (format MM/dd/yyyy HH:mm:ss): ");
String timeentry = sc.nextLine();
System.out.print("Enter the employee ID number: ");
String idnumber = sc.nextLine() + " ";
String inorout = "in";
System.out.println("The Punch-in date / time is: " + timeentry);
System.out.println("The employee ID number is: " + idnumber);
System.out.println("The employee is punched-" + inorout);
PunchinPunchoutData product = new PunchinPunchoutData();
product.setTimeentry(timeentry);
product.setIdnumber(idnumber);
product.setInorout(inorout);
productDAO.punchIn(product);
System.out.println();
System.out.print("Press enter to continue ");
sc.nextLine();
}
public static void punchOut() throws FileNotFoundException, IOException {
Scanner sc = new Scanner(System.in);
System.out.print("Enter Date and time (format MM/dd/yyyy HH:mm:ss): ");
String timeentry = sc.nextLine();
br = new BufferedReader(new FileReader("timeclock1.txt"));
String line = "";
System.out.print("Enter an employee ID number: ");
String idnumber = sc.next() + " ";//read the choice
sc.nextLine();// discard any other data entered on the line
while ((line = br.readLine()) != null) {
if (line.contains(idnumber + " ") && line.endsWith("in")) {
break;
}
else {
System.out.println("There is no punch-in record for ID number: " + idnumber);
System.out.println("A punch-in entry must be saved first");
punchIn();
break;
}
}
String inorout = "out";
System.out.println("The Punch-out date / time is: " + timeentry);
System.out.println("The employee ID number is: " + idnumber);
System.out.println("The employee is punched-" + inorout + ".");
PunchinPunchoutData product = new PunchinPunchoutData();
product.setTimeentry(timeentry);
product.setIdnumber(idnumber);
product.setInorout(inorout);
productDAO.punchOut(product);
System.out.println();
System.out.print("Press enter to continue ");
sc.nextLine();
}
A: It seems you are reading a file line by line to check if the employee has a punch-in record. In your code, you call punchIn the moment you hit a line that does not belong to the employee, which will most probably add a punch-in every time punchOut is called. You should loop over the whole file, and only call punchIn when no line in the file contains a record.
boolean foundPunchIn = false;
while ((line = br.readLine()) != null) {
if (line.contains(idnumber + " ") && line.endsWith("in")) {
foundPunchIn = true;
break;
}
}
if(!foundPunchIn) {
System.out.println("There is no punch-in record for ID number: " + idnumber);
System.out.println("A punch-in entry must be saved first");
punchIn();
}
String inorout = "out";
...
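The found-flag pattern from the answer — scan all records first, act only after the loop finishes — in a compact Python sketch (the record format is simplified for illustration):

```python
def has_punch_in(lines, idnumber):
    """Return True if any record belongs to idnumber and ends with 'in'."""
    found = False
    for line in lines:
        if idnumber + " " in line and line.endswith("in"):
            found = True
            break  # a match was found, no need to keep scanning
    # Only decide after every candidate line has had a chance to match
    return found

records = ["101  01/02/2020 09:00:00 in", "102  01/02/2020 09:05:00 out"]
print(has_punch_in(records, "101"))  # True
print(has_punch_in(records, "102"))  # False
```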
| |
doc_23537698 | There is issue with services container being built for test env. I literally didn't change anything expect APP_ENV and framework.test.
When I fetch service from test cached container I end up with:
Maximum function nesting level of '256' reached, aborting!
In the stack trace I can see that Symfony's DI keeps trying to fetch the same services:
...
ContainerE6ODQnH\srcApp_KernelTestDebugContainer->getPanel_Model_EventService() at /var/www/html/panel/var/cache/test/ContainerE6ODQnH/srcApp_KernelTestDebugContainer.php:483
ContainerE6ODQnH\srcApp_KernelTestDebugContainer->getDefaultEventRepositoryService() at /var/www/html/panel/var/cache/test/ContainerE6ODQnH/srcApp_KernelTestDebugContainer.php:525
ContainerE6ODQnH\srcApp_KernelTestDebugContainer->getDbReachingEventTranslationProviderService() at /var/www/html/panel/var/cache/test/ContainerE6ODQnH/srcApp_KernelTestDebugContainer.php:509
ContainerE6ODQnH\srcApp_KernelTestDebugContainer->getCachingEventTranslationProviderService() at /var/www/html/panel/var/cache/test/ContainerE6ODQnH/srcApp_KernelTestDebugContainer.php:541
ContainerE6ODQnH\srcApp_KernelTestDebugContainer->getEventContextTakingTranslatorService() at /var/www/html/panel/var/cache/test/ContainerE6ODQnH/srcApp_KernelTestDebugContainer.php:402
ContainerE6ODQnH\srcApp_KernelTestDebugContainer->getModelConfiguratorService() at /var/www/html/panel/var/cache/test/ContainerE6ODQnH/srcApp_KernelTestDebugContainer.php:1089
(the same six frames repeat until the nesting limit is reached)
...
It's weird because I don't have a circular reference in my definitions. With APP_ENV=dev everything is OK.
It looks like for some reason the test container cannot remember references to existing services in its $this->services property.
Do you know how the building of the dev and test containers differs?
When I compare the test container PHP file with the dev version, they are indeed different, and I can't see why...
UPDATE
Here is an example of a generated service that is in the loop of invocations:
DEV
protected function getDefaultEventRepositoryService()
{
$a = \ClassRegistry::init('Event');
$this->services['Panel\\Events\\Repository\\DefaultEventRepository'] = $instance = new \Panel\Events\Repository\DefaultEventRepository($a, ($this->services['Panel\\Events\\Repository\\EventMapper'] ?? ($this->services['Panel\\Events\\Repository\\EventMapper'] = new \Panel\Events\Repository\EventMapper())), ($this->privates['timeProvider'] ?? ($this->privates['timeProvider'] = new \Panel\Core\Utils\CurrentTimeProvider())));
($this->services['CakeFramework\\ModelConfigurator'] ?? $this->getModelConfiguratorService())->configure($a);
return $instance;
}
TEST
protected function getDefaultEventRepositoryService()
{
$a = $this->getPanel_Model_EventService();
if (isset($this->services['Panel\\Events\\Repository\\DefaultEventRepository'])) {
return $this->services['Panel\\Events\\Repository\\DefaultEventRepository'];
}
return $this->services['Panel\\Events\\Repository\\DefaultEventRepository'] = new \Panel\Events\Repository\DefaultEventRepository($a, ($this->services['Panel\\Events\\Repository\\EventMapper'] ?? ($this->services['Panel\\Events\\Repository\\EventMapper'] = new \Panel\Events\Repository\EventMapper())), ($this->privates['timeProvider'] ?? ($this->privates['timeProvider'] = new \Panel\Core\Utils\CurrentTimeProvider())));
}
As you can see above, there is a slight difference. The test environment uses the service method getPanel_Model_EventService(), but in the development environment the dependency is inlined directly as $a = \ClassRegistry::init('Event');
This causes the circular reference even though the service definition is the same, and there are no additional *_test service files. Any idea why?
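For context, a minimal sketch of the definition shape that can produce such a loop, written as a Symfony PHP services config. The service ids and argument wiring here are assumptions reconstructed from the stack trace, not the actual definitions: the cycle only exists through the configurator edge, which the container compiler handles differently per environment.

```php
<?php
// Hypothetical reconstruction of the cycle seen in the stack trace.
// The repository depends on the Event model, the model is set up by a
// configurator, and the configurator's own dependency chain eventually
// needs the repository again.

use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;
use function Symfony\Component\DependencyInjection\Loader\Configurator\service;

return static function (ContainerConfigurator $configurator): void {
    $services = $configurator->services();

    // The CakePHP Event model, post-processed by ModelConfigurator.
    $services->set('panel.model.event', \Event::class)
        ->factory([\ClassRegistry::class, 'init'])
        ->args(['Event'])
        ->configurator([service(\CakeFramework\ModelConfigurator::class), 'configure']);

    // ModelConfigurator pulls in the translator/provider chain...
    $services->set(\CakeFramework\ModelConfigurator::class)
        ->args([service('panel.event_context_taking_translator')]);

    // ...which (through the caching/db-reaching providers) needs the repository...
    $services->set('panel.event_context_taking_translator')
        ->args([service(\Panel\Events\Repository\DefaultEventRepository::class)]);

    // ...and the repository needs the configured model again: a cycle.
    $services->set(\Panel\Events\Repository\DefaultEventRepository::class)
        ->args([service('panel.model.event')]);
};
```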
A: I investigated this problem. It's caused by Symfony's configurator mechanism and its failure to report circular dependencies that pass through a configurator.
I described this problem with code examples in a public repo: https://github.com/kamilwylegala/symfony-configurator-circular-dependency
Switching to a factory helped confirm that there is indeed a circular dependency. Using a factory raises:
PHP Fatal error: Uncaught Symfony\Component\DependencyInjection\Exception\ServiceCircularReferenceException: Circular reference detected for service ...
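A minimal sketch of the factory-based workaround the answer describes; the factory class and its name are hypothetical, not from the linked repo. The point is that with a factory, the configuration step becomes an ordinary constructor dependency that the compiler can analyze, so the cycle is detected at build time instead of recursing at runtime.

```php
<?php
// Hypothetical factory replacing the configurator edge. The container
// is told to build the model via create(), so ModelConfigurator is a
// visible dependency and ServiceCircularReferenceException fires early.

final class ConfiguredEventModelFactory
{
    public function __construct(
        private readonly \CakeFramework\ModelConfigurator $configurator,
    ) {
    }

    public function create(): \Event
    {
        $model = \ClassRegistry::init('Event');
        $this->configurator->configure($model);

        return $model;
    }
}
```

The service would then be registered with something like `->factory([service(ConfiguredEventModelFactory::class), 'create'])` instead of a `configurator` call.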
| |
doc_23537699 |
A: You cannot time out or stop jobs in Sidekiq. Doing so is dangerous and can corrupt your application data.
It sounds like you have a single Sidekiq process with a concurrency of 1. You can start multiple Sidekiq processes so they work on different jobs in parallel, and you can increase each process's concurrency to the same effect.
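For the concurrency option, a minimal config fragment; the value and queue name are assumptions to illustrate the shape, not a recommendation for any particular workload:

```yaml
# config/sidekiq.yml — raise the number of worker threads per process
:concurrency: 10
:queues:
  - default
```

The same can be done ad hoc with `bundle exec sidekiq -c 10`, and starting that command more than once gives you multiple independent processes pulling from the same queues.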
|