doc_23538000
|
Have also tried just DBMS_LOCK instead of SYS.DBMS_LOCK
SQL> GRANT EXECUTE ON SYS.DBMS_LOCK to myuser;
GRANT EXECUTE ON SYS.DBMS_LOCK to myuser
*
ERROR at line 1:
ORA-04042: procedure, function, package, or package body does not exist
sqlplus "sys/ChangeMe123! AS SYSDBA"
Note - other grants worked
SQL> GRANT ALTER SESSION TO myuser;
Grant succeeded.
SQL> GRANT CREATE PROCEDURE TO myuser;
Grant succeeded.
SQL> GRANT CREATE SEQUENCE TO myuser;
Grant succeeded.
SQL> GRANT CREATE SESSION TO myuser;
Grant succeeded.
SQL> GRANT CREATE MATERIALIZED VIEW TO myuser;
Grant succeeded.
SQL> GRANT CREATE TABLE TO myuser;
Grant succeeded.
SQL> GRANT CREATE TRIGGER TO myuser;
Grant succeeded.
SQL> GRANT CREATE VIEW TO myuser;
Grant succeeded.
SQL> GRANT CREATE ANY SYNONYM TO myuser;
Grant succeeded.
SQL> GRANT DROP ANY SYNONYM TO myuser;
Grant succeeded.
SQL> GRANT SELECT ANY DICTIONARY TO myuser;
Grant succeeded.
SQL> GRANT EXECUTE ON DBMS_LOCK to myuser;
GRANT EXECUTE ON DBMS_LOCK to myuser
A: DBMS_LOCK.SLEEP was deprecated and replaced with DBMS_SESSION.SLEEP, but it is still available in 19c for backwards compatibility. Verify whether the object exists:
SQL> select object_name,object_type,owner from dba_objects
2 where object_name='DBMS_LOCK';
OBJECT_NAME OBJECT_TYPE OWNER
------------------------------ ----------------------- ------------------------------
DBMS_LOCK PACKAGE SYS
DBMS_LOCK PACKAGE BODY SYS
DBMS_LOCK SYNONYM PUBLIC
If the above query returns nothing, run the dbmslock script as SYSDBA; it creates the package above:
SQL> @?/rdbms/admin/dbmslock
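Once the package exists, the original grant should succeed; alternatively, on 18c/19c the code can call the non-deprecated replacement directly. A hedged sketch (assuming the sleep call is all that is needed):
SQL> GRANT EXECUTE ON SYS.DBMS_LOCK TO myuser;
-- or, in the user's PL/SQL, use the replacement package instead:
BEGIN
  DBMS_SESSION.SLEEP(5);  -- pause for 5 seconds
END;
/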
| |
doc_23538001
|
Why does the project have both bootstrap.css and bootstrap-theme.css?
Which one should I replace when I want the replace the theme ?
P.S. Note that this question is about LESS and this one has an answer related to Bootstrap 2.
A: Because Bootstrap almost always ships two files; bootstrap-theme.css contains the optional theme.
Quoting http://getbootstrap.com/getting-started/
Fonts from Glyphicons are included, as is the optional Bootstrap theme.
| |
doc_23538002
|
Error: unable to read property list from file: /Users/myname/Developer/appname/ios/Runner/Info.plist:
The operation couldn't be completed. (XCBUtil.PropertyListConversionError error 1.)
This is my Info.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>LSApplicationQueriesSchemes</key>
<array>
<string>https</string>
<string>http</string>
<string>tel</string>
<string>mailto</string>
</array>
<key>CFBundleDevelopmentRegion</key>
<string>$(DEVELOPMENT_LANGUAGE)</string>
<key>CFBundleExecutable</key>
<string>$(EXECUTABLE_NAME)</string>
<key>CFBundleIdentifier</key>
<string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>apitesting</string>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
<string>$(FLUTTER_BUILD_NAME)</string>
<key>CFBundleSignature</key>
<string>????</string>
<key>CFBundleVersion</key>
<string>$(FLUTTER_BUILD_NUMBER)</string>
<key>LSRequiresIPhoneOS</key>
<true/>
<key>UILaunchStoryboardName</key>
<string>LaunchScreen</string>
<key>UIMainStoryboardFile</key>
<string>Main</string>
<key>UISupportedInterfaceOrientations</key>
<array>
<string>UIInterfaceOrientationPortrait</string>
<string>UIInterfaceOrientationLandscapeLeft</string>
<string>UIInterfaceOrientationLandscapeRight</string>
</array>
<key>UISupportedInterfaceOrientations~ipad</key>
<array>
<string>UIInterfaceOrientationPortrait</string>
<string>UIInterfaceOrientationPortraitUpsideDown</string>
<string>UIInterfaceOrientationLandscapeLeft</string>
<string>UIInterfaceOrientationLandscapeRight</string>
</array>
<key>UIViewControllerBasedStatusBarAppearance</key>
<false/>
<key>io.flutter.embedded_views_preview</key>
<String>YES<String>
</dict>
</plist>
The app runs fine on VS Code via an Android Emulator but when I ran the project in Xcode, it gave this error. I made some edits myself when I had to configure it a little bit:
The LSApplicationQueriesSchemes entry with "https, http, tel, mailto" was added by me because I needed it for one of my packages.
A: Ok, I found the error. The dependency I was using (WebView) led me to add an entry where the string tag was capitalized when it should have been lowercase, and the closing tag was missing its slash.
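For reference, based on that fix, the corrected entry in Info.plist would be:
<key>io.flutter.embedded_views_preview</key>
<string>YES</string>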
| |
doc_23538003
|
//file Globals.cs in App_Code folder
public class Globals
{
public static string labelText = "";
}
and a simple aspx page which has textbox, label and button. The CodeFile is:
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
Label1.Text = Globals.labelText;
}
protected void Button1_Click1(object sender, EventArgs e)
{
Globals.labelText = TextBox1.Text;
}
}
That is, when I click the button, Globals.labelText is set from the textbox. The question is: why, when I open this page in another browser, does the label show the value I set from the first browser? In other words, the static member is shared across all users. I thought every request was served in its own AppDomain created by its own copy of the IIS process. What is going on?
A: Yes, you may use a static variable to store application-wide data, but it is not thread-safe. Use the Application object with its Lock and UnLock methods instead of static variables.
Take a look at ASP.NET Application Life Cycle Overview for IIS 7.0 and ASP.NET Application Life Cycle Overview for IIS 5.0 and 6.0
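For illustration, here is a minimal sketch of that suggestion in the page code-behind (my own example, not part of the answer; control names follow the question's page):
protected void Button1_Click1(object sender, EventArgs e)
{
    // application state is shared across users, so serialize writes with Lock/UnLock
    Application.Lock();
    Application["LabelText"] = TextBox1.Text;
    Application.UnLock();
}
protected void Page_Load(object sender, EventArgs e)
{
    // application state stores objects, so cast back to string
    Label1.Text = (Application["LabelText"] as string) ?? string.Empty;
}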
A: No, static in this case is static in that manner only for the lifecycle of the process the request lives on. So this variable will be static the entire time you're processing a single request. In order to have a "static" variable in the manner you describe, you'd have to make it an application variable. Something like this:
//file Globals.cs in App_Code folder
public class Globals
{
// I really recommend using a more descriptive name
public static string LabelText
{
    get
    {
        // application state stores objects, so cast back to string (requires System.Web)
        return HttpContext.Current.Application["LabelText"] as string ?? string.Empty;
    }
    set
    {
        HttpContext.Current.Application["LabelText"] = value;
    }
}
}
By making it an application variable it should survive multiple page requests. A vulnerability it has though is that it will not survive an application pool recycle, and for large applications this can be problematic. If you truly want this variable to behave in a static manner reliably you're probably better off storing its state in a database somewhere.
| |
doc_23538004
|
I have the following array of objects, the array is sorted by IP field:
let array = [{ 'ip': '192.168.0.1' }, { 'ip': '192.168.0.4'}, { 'ip': '192.168.0.10'}, { 'ip': '192.168.0.50'}, { 'ip': '192.168.0.60'}, ];
Now I would like to insert a new object to the array:
const newObject = { ip: '192.168.0.13' };
How can I add this new object to the correct position?
So that my array then looks like:
let array = [{ 'ip': '192.168.0.1' }, { 'ip': '192.168.0.4'}, { 'ip': '192.168.0.10'}, { ip: '192.168.0.13' }, { 'ip': '192.168.0.50'}, { 'ip': '192.168.0.60'}, ];
How could I find the correct index, where to insert the item ?
A: You can use splice with the delete count set to 0 to add at a specific position, but this mutates the original array.
The same can be done with slice and the spread operator, which does not mutate the original array.
To find the position where you need to insert, you can pass a predicate to findIndex; it will return the index at which to insert the new object.
The predicate here is an IP comparison function. In my example I made a crude IP-comparing function which does the job:
let array = [{ 'ip': '192.168.0.1' }, { 'ip': '192.168.0.4'}, { 'ip': '192.168.0.10'}, { 'ip': '192.168.0.50'}, { 'ip': '192.168.0.60'}, ];
const newObject = { ip: '192.168.0.13' };
const compareIp = (ip1, ip2) => {
const arr1 = ip1.split(".").map(n => +n);
const arr2 = ip2.split(".").map(n => +n);
return arr1.every((e, i) => e <= arr2[i]);
}
//Not mutating original array
const insertAtPosition = (arr, obj) => {
const idx = arr.findIndex(o => compareIp(obj.ip, o.ip));
return [...arr.slice(0, idx), obj, ...arr.slice(idx)]
}
//Mutating original array
const insertAtPositionMutated = (arr, obj) => {
const idx = arr.findIndex(o => compareIp(obj.ip, o.ip));
arr.splice(idx, 0, obj);
return arr
}
console.log(insertAtPosition(array, newObject));
console.log(insertAtPositionMutated(array, newObject));
A: You can try this
array.push(newObject);
And after that sort it again.
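A minimal sketch of that approach (the octet-by-octet comparator below is my own assumption, not part of the answer):
array.push(newObject);
array.sort((a, b) => {
  const x = a.ip.split('.').map(Number);
  const y = b.ip.split('.').map(Number);
  // compare octet by octet, most significant first
  for (let i = 0; i < 4; i++) {
    if (x[i] !== y[i]) return x[i] - y[i];
  }
  return 0;
});
console.log(array);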
A: Using Array.findIndex, you can find the position to be inserted into the current array and using Array.splice, you can insert ip address to that position.
let array = [{ 'ip': '192.168.0.1' }, { 'ip': '192.168.0.4'}, { 'ip': '192.168.0.10'}, { 'ip': '192.168.0.50'}, { 'ip': '192.168.0.60'}, ];
const newObject = { ip: '192.168.0.13' };
const newIpArr = newObject.ip.split(".").map(item => parseInt(item));
const insertPos = array.findIndex(({ ip }) => {
const ipArr = ip.split(".").map(item => parseInt(item));
for (let index = 0; index < ipArr.length; index ++) {
if (ipArr[index] > newIpArr[index]) {
return true;
}
}
return false;
});
array.splice(insertPos, 0, newObject);
console.log(array);
A: Another answer using findIndex and splice, but with a different way to compare the IP addresses:
by converting each one into a single 32-bit number, which is (IMO) simpler to deal with than an array of 4 bytes.
let array = [
{ 'ip': '192.168.0.1'},
{ 'ip': '192.168.0.4'},
{ 'ip': '192.168.0.10'},
{ 'ip': '192.168.0.50'},
{ 'ip': '192.168.0.60'},
];
const newObject = { ip: '192.168.0.13' };
const foldIp = (a,b) => a<<8 | b;
const ipValue = item => item.ip.split(".").reduce(foldIp, 0);
const newObjectValue = ipValue(newObject);
const index = array.findIndex(item => ipValue(item) > newObjectValue);
if (index === -1) {
array.push(newObject);
} else {
array.splice(index, 0, newObject);
}
console.log(array);
// or use it to sort();
console.log(
"sort descending:", // as the array is currently sorted ascending
array.sort((a,b) => ipValue(b) - ipValue(a))
);
For completeness, here are both conversions:
const IP = {
// 3232235521 -> '192.168.0.1'
toString(value) {
return `${value >>> 24}.${value >> 16 & 0xFF}.${value >> 8 & 0xFF}.${value & 0xFF}`
},
// '192.168.0.1' -> 3232235521
toUint(value) {
const arr = value.split(".");
return (arr[0] << 24 | arr[1] << 16 | arr[2] << 8 | arr[3]) >>> 0;
}
}
| |
doc_23538005
|
array([[ 0. , 0. , 0. , 0.86826141, 0. ,
0. , 0.88788426, 0. , 0.4089203 , 0.88134901],
[ 0. , 0. , 0.46416372, 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0.31303966,
0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. ,
0. , 0.3155742 , 0. , 0.64059294, 0. ],
[ 0. , 0. , 0. , 0. , 0.51349938,
0. , 0. , 0. , 0.53593509, 0. ],
[ 0. , 0.01252787, 0. , 0.6870415 , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0.16643105, 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0.08626592, 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0.66939531],
[ 0.43694586, 0. , 0. , 0. , 0. ,
0.95941661, 0. , 0.52936733, 0.79687149, 0.81463887]])
b is generated using X.dot(np.ones(10)). Now I wanted to solve this using LU factorization, and for that I did the following:
lu_fac=scipy.linalg.lu_factor(X)
scipy.linalg.lu_solve(lu_fac,b)
Which gives
array([ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan])
Also, lu_factor seems to work fine in this case (sometimes it gives a runtime warning saying "Diagonal number %d is exactly zero. Singular matrix"). For completeness, here is the code verifying that the PLU from lu_factor is the same as X:
L=np.tril(lu_fac[0])
np.fill_diagonal(L,1)
U=np.triu(lu_fac[0])
perm=np.arange(10)
ipiv=lu_fac[1]
for i in range(10):
temp=perm[i]
perm[i]=perm[ipiv[i]]
perm[ipiv[i]]=temp
np.allclose(X[perm,:],L.dot(U))
Now I know my matrix is singular and there are infinitely many solutions to my problem. But I am interested in any solution, and I am just confused about why the LU factorization fails: can't it set the free variables to 0 and find some solution, as we are taught? Also, what is the deal with the runtime warning "Diagonal number %d is exactly zero. Singular matrix"? Note that I am not interested in an SVD/QR approach to solve this; I am just curious why LU fails for singular matrices. Any suggestions are greatly appreciated. Thanks.
A: 0 / lu_fac[0][9, 9]
returns nan because that entry, the last diagonal entry of U, is zero. So this nan becomes the value of the 9th variable. Then it is substituted into the equations above and, naturally, the rest comes out as nan too. SciPy's LU code, or rather the Fortran code it wraps, is not designed for rank-deficient matrices, so it is not going to make up values for the variables that can't be determined.
Also what is the deal with the run time warning "Diagonal number %d is exactly zero. Singular matrix".
The warning is clear: the algorithm detected a singular matrix, which is not expected. It also tells you that the implementation is not intended for use with singular matrices.
have vector b which is in range space of A
That's the theory. In practice, one can't be sure about anything being in the range space of a rank-deficient matrix because of the errors inherent in floating point arithmetic. You can compute b = A.dot(...) and then try to solve Ax=b, and there won't be a solution because of the errors introduced when manipulating floating point numbers.
By the way: you mentioned that PLU factorization exists for every square matrix, but SciPy is not necessarily designed to compute it. For example,
scipy.linalg.lu_factor(np.array([[0, 1], [0, 0]]))
returns a matrix with NaNs. In your case, NaN appear later, when attempting to find a solution and encountering a zero diagonal element of factor U.
A: As mentioned here, a matrix has an LU factorization if and only if rank(A11) + k >= rank([A11 A12]) + rank([A11 A21]). In your case, rank(A11) = 3, k = 5,
and rank([A11 A12]) + rank([A11 A21]) = 9. So your matrix does not satisfy the condition and does not have an LU factorization.
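As an illustration (my own sketch, assuming A11 denotes the leading k-by-k block of the matrix X), the condition can be checked numerically:
import numpy as np

def lu_condition_holds(X, k):
    # rank(A11) + k >= rank([A11 A12]) + rank([A11 A21])
    r11 = np.linalg.matrix_rank(X[:k, :k])
    r_top = np.linalg.matrix_rank(X[:k, :])    # [A11 A12]: first k rows
    r_left = np.linalg.matrix_rank(X[:, :k])   # [A11; A21]: first k columns
    return r11 + k >= r_top + r_left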
| |
doc_23538006
|
public async Task UpdateBaseEntityTypeAsync(Guid id, Guid baseEntityId, IEnumerable<string> type)
{
await UpdateAsync(id, baseEntityId, async x => x.UpdateType(type));
}
public async Task UpdateBaseEntityNameAsync(Guid id, Guid baseEntityId, string name)
{
await UpdateAsync(id, baseEntityId,
async x => await x.UpdateNameAsync(name)).ConfigureAwait(false);
}
public async Task UpdateAsync(Guid aggregatorId, Guid baseEntityId, Func<BaseEntity, Task> updateAsync)
{
var someAggregator = await aggregatorRepo.TryFindById(aggregatorId).ConfigureAwait(false);
var baseEntity = someAggregator.ListBaseEntities()
.FirstOrDefault(x => x.Id == baseEntityId);
await updateAsync.Invoke(baseEntity).ConfigureAwait(false);
await aggregatorRepo.Save(someAggregator).ConfigureAwait(false);
}
I have done something like this, but I think it looks like crap:
public class BaseEntityDTO
{
public Guid Id { get; set; }
public string Name { get; set; }
public IEnumerable<string> Type { get; set; }
}
public async Task UpdateAsync(Guid aggregatorId, List<BaseEntityDTO> baseEntityDTOs)
{
    var someAggregator = await aggregatorRepo.TryFindById(aggregatorId).ConfigureAwait(false);
    foreach (var dto in baseEntityDTOs)
    {
        var baseEntity = someAggregator.ListBaseEntities()
            .FirstOrDefault(x => x.Id == dto.Id);
        if (dto.Type != null)
        {
            baseEntity.UpdateType(dto.Type);
        }
        if (dto.Name != null)
        {
            await baseEntity.UpdateNameAsync(dto.Name).ConfigureAwait(false);
        }
        await aggregatorRepo.Save(someAggregator).ConfigureAwait(false);
    }
}
Can it be done in some other way?
| |
doc_23538007
|
I want to access products[] array.
I can access console.log(shoppingcart[0].products[0]); // {productId: 1111, quantity: 3, price: 3}
but I do not know how to get the average value of the price across the nested products[] arrays.
1) products: [
{ productId: 1111, quantity: 3, price: 3.0 }
]
2) products: [
{ productId: 1111, quantity: 1, price: 3.0 },
{ productId: 1112, quantity: 1, price: 1.0 }
]
output should be
price: 3.0
price: 3.0
price: 1.0
and the average:
(3 + 3 + 1) / 3 = 7 / 3 ≈ 2.33
This is my code below.
let shoppingcart = [
{
orderId: 100000,
dayofWeeks: "May.06.2020",
products: [
{ productId: 1111, quantity: 3, price: 3.0 }
]
},
{
orderId: 100001,
dayofWeeks: "Aug.12.2020",
products: [
{ productId: 1111, quantity: 1, price: 3.0 },
{ productId: 1112, quantity: 1, price: 1.0 }
]
},
];
A: .flatMap is what you want here.
let shoppingCart = [
{
orderId: 100000,
dayofWeeks: "May.06.2020",
products: [
{ productId: 1111, quantity: 3, price: 3.0 }
]
},
{
orderId: 100001,
dayofWeeks: "Aug.12.2020",
products: [
{ productId: 1111, quantity: 1, price: 3.0 },
{ productId: 1112, quantity: 1, price: 1.0 }
]
},
];
const prices = shoppingCart.flatMap(c => c.products.map(p => p.price));
const average = prices.reduce((a,b) => a + b) / prices.length;
console.log(`average: ${average}`);
| |
doc_23538008
|
if(answer == q.getAnswer()){
scoreTxt.setText("Score: "+(putScore+1));
correct = true;
}else if(answer != q.getAnswer()){
setHighScore();
scoreTxt.setText("Score: 0");
A: There are several options. One example is disabling the button after it has been clicked in the OnClickListener:
button.setEnabled(false);
Don't forget to enable the button once moving on to the next question (I'm assuming your game has questions and answers).
A: You're marking a bool as true. Why not use it to make sure the check can only succeed once?
if(answer == q.getAnswer() && !correct) {
| |
doc_23538009
|
import turtle
t = turtle.Turtle()
def drawTriangle(t, side):
t.forward(side)
t.left(120)
for x in range (3):
drawTriangle(t, 100)
drawTriangle()
A: Here is a basic triangle. If you have a function called drawTriangle, it makes sense for it to draw a complete triangle rather than something you have to call three times to get a triangle.
import turtle
def drawTriangle(t, side):
for _ in range(3):
t.forward(side)
t.left(120)
t = turtle.Turtle()
drawTriangle(t, 200)
Not sure what you mean by lines across; if you edit the question to make that clearer, hopefully I will notice in time to edit the answer to add that part. Okay, now I see the picture, coming up:
This will do:
import turtle
def drawTriangle(t, side):
for _ in range(3):
t.forward(side)
t.left(120)
t = turtle.Turtle()
t.penup()
t.setheading(-120)
t.setposition(0, 100)
t.pendown()
for side in range(40, 240, 40):
drawTriangle(t, side)
A: Here's a bare bones solution that operates on an isosceles triangle and doesn't necessarily slice the entire triangle evenly (as in the OP's illustration):
from turtle import Screen, Turtle
SHOWN, TOTAL = 5, 7 # five of seven equal slices will be shown
WIDTH, HEIGHT = 540, 270 # dimensions of (isosceles) triangle
screen = Screen()
turtle = Turtle()
for n in range(TOTAL - SHOWN + 1, TOTAL + 1):
ratio = n / TOTAL
turtle.goto(WIDTH/2 * ratio, -HEIGHT * ratio)
turtle.setx(-WIDTH/2 * ratio)
turtle.home()
turtle.hideturtle()
screen.exitonclick()
A different approach is to use stamping instead of drawing. Not necessarily simpler in this case but it might be easier for some folks to visualize drawing entire triangles in one stroke:
from turtle import Screen, Turtle
SHOWN, TOTAL = 5, 7 # five of seven equal slices will be shown
WIDTH, HEIGHT = 540, 270
DELTA = ((TOTAL - 1) + HEIGHT / TOTAL) / 2 # a bit of fudging here
CURSOR_SIZE = 20
screen = Screen()
screen.mode('logo')
turtle = Turtle()
turtle.hideturtle()
turtle.shape('triangle')
turtle.fillcolor('white')
for n in range(TOTAL, TOTAL - SHOWN, -1):
ratio = n / TOTAL
turtle.shapesize(ratio * WIDTH / CURSOR_SIZE, ratio * HEIGHT / CURSOR_SIZE)
turtle.stamp()
turtle.forward(DELTA)
screen.exitonclick()
| |
doc_23538010
|
JSON
{
"strA": "MyStr",
"Street": "1st Lane",
"Number": "123"
}
POJO
@JsonIgnoreProperties(ignoreUnknown = true)
public class ClassA {
@JsonProperty("strA")
private String strA;
private Address address;
//Constructor, getter,setter
@JsonRootName("Address")
@JsonIgnoreProperties(ignoreUnknown = true)
public class Address {
private String address;
public Address() {
}
public String getAddress() {
return address;
}
@JsonAnySetter
public void setAddress(@JsonProperty("Street") String street, @JsonProperty("Number")String number) {
this.address = number + " " + street;
}
}
Now, Address is properly created from the sample JSON (I only got it working with @JsonAnySetter, unfortunately), but I can't get ClassA to be created properly.
I've tried annotating the Address property of it, but to no avail.
How can I achieve this in a "simple" way? This is important as this example is simple, but my real use cases involves several composed classes that need information from the JSON root + from complex JSON elements with different names.
Thank you for your time.
| |
doc_23538011
|
The reason why I ask, is that I'm using dotfuscator & runtime intelligence, so I need to build, dotfuscate, then deploy with AppDep, but then my application data is gone. I realize that I could get around this by setting up the dotfuscator to run via command line in the post-build scripts, and then deploy w/ VS, but for now let's assume that I don't want to do that.
Does anybody know how to deploy a XAP the way VS 2010 does it, so that it doesn't erase Isolated Storage? I'm hoping there's a command line program I can run.
A: AFAIK there is no way around this. The deployment tool first uninstalls and then re-installs the app, so isolated storage gets wiped. The same is true when deploying from Visual Studio after selecting the Rebuild All or Clean Build -> Build options.
There are a couple of ways around this:
*
*Use Isolated Storage Explorer, this will let you browse, upload and download files to isolated storage.
*Setup a WCF / web service service to connect to and transfer files to and from the app
*Write a bunch of initialization code in the app that can be triggered to create all the files you need
A: I've generally been checking for DEBUG and then running a method that sets up my test data, etc.
ie-
#if DEBUG
SetupTestData();
#endif
//load like normal now that test data's set up.
To make this really easy, on occasion, I've also just used the app to generate the data I want to use from then on, then Debug.WriteLine(...) my XML or similar method to dump it.
| |
doc_23538012
|
#a is in base 10
In [143]: a
Out[143]: 536899058
Usually, I would do a bit-wise AND between the number and a bit-mask.
# 11111111 11111111 00000000 00000000 is 4294901760 in base 10
In [145]: a & 4294901760
Out[145]: 536870912L
In this particular case, are there any disadvantages in shifting the number to the right by 16 bits, and then shifting it back to the left by 16 positions?
In [146]: (a >> 16) << 16
Out[146]: 536870912
A: Aside from obfuscating your intent and taking more than one instruction, no.
If you want to be 100% sure, feed your query into an SMT solver that understands bitvectors and binary operators, such as Z3. It will prove whether or not the statements are equal (spoiler: they are); an online REPL is here.
from z3 import BitVec, prove
x = BitVec('x', 32)
prove(x & 0xFFFF0000 == ((x >> 16) << 16))
| |
doc_23538013
|
<application android:debuggable="true" ...> in AndroidManisfest.xml file.
Oddly, for one app the LogCat prints all actions and for the other the LogCat remains empty.
I can't figure out why that is happening.
Help please.
A: Open the Devices view in Eclipse by pressing ctrl+3 and write "devices".
When you launch each app, go to the Devices view and choose the device/app.
Now go back to the LogCat view and see if you see any logs.
This question has nothing to do with Worklight; I have edited the question accordingly.
| |
doc_23538014
|
select p.id,
p.name,
-- other columns from joined tables
decode(get_complicated_number(p.id), null, null, 'The number is: ' || get_complicated_number(p.id))
from some_table p
-- join other tables and WHERE clause
It includes get_complicated_number call which queries multiple tables. I wasn't able to write it as a JOIN statement that would be as fast and as easy to maintain as a separate function so far.
Currently the function is called twice in case its return value is not NULL.
In reality I have an XML generation package that gets the data with a select:
select distinct xmlAgg
(
xmlelement
(
"TestElement",
xmlelement("Id", p.id),
xmlelement("Name", p.name),
-- other elements from joined tables
decode(get_complicated_number(p.id), null, null, xmlelement("ComplicatedNum", get_complicated_number(p.id)))
)
)
from some_table p
-- join other tables and WHERE clause
Is there a way to make it only one call and still avoid creating an empty element on NULL?
A: You can use WITH Syntax (Common Table Expressions) as:
with complicated_number as (
select get_complicated_number(p.id) as num from some_table p
) select distinct xmlAgg
--...
decode(complicated_number.num, null, null, xmlelement("ComplicatedNum", complicated_number.num))
from complicated_number
A common table expression (CTE) is a named temporary result set that exists within the scope of a single statement and can be referred to later within that statement, possibly multiple times.
A: user7294900's answer is good, but if it's hard to combine with your existing joins, here's an alternate version with an inline view instead of a CTE.
select distinct xmlAgg
(
xmlelement
(
"TestElement",
xmlelement("Id", p2.id),
xmlelement("Name", p2.name),
-- other elements from joined tables
decode(p2.num, null, null, xmlelement("ComplicatedNum", p2.num))
)
)
from (
select p.id, p.name, get_complicated_number(p.id) as num
from some_table p
) p2
-- join other tables to p2. or put them inside it.
If you want help with adding your existing joins to these example queries, you might need to edit your question and add your other tables and WHERE clauses.
| |
doc_23538015
|
I can't provide more details as I'm not clear with this.
Any suggestions welcome.
A: There is such an extension for flask already: Flask-RBAC. If you want to code one yourself you should inspect the code of the existing one. Or you can use it in your applications.
A: You can use Casbin. Casbin supports both PHP (PHP-Casbin) and Python (PyCasbin). It also has the Yii middleware and Flask middleware. Here's the FLask middleware: https://github.com/pycasbin/flask-authz
| |
doc_23538016
|
For example if I have a "Persons" table with "ID", "First Name", "Last Name", "Age" as attributes, I want to get transpose of a row in "Persons" table with following two columns:
Column_name, Column_value
I can get the column names of table using:
SELECT *
FROM `INFORMATION_SCHEMA`.`COLUMNS`
WHERE `TABLE_SCHEMA`='databasename'
AND `TABLE_NAME`='tablename';
I tried to get values of column names using:
select attributes.`COLUMN_NAME`, person.attributes.`COLUMN_NAME` as `Column_Value`
from (select * from Persons where ID=1) as person,
(SELECT * FROM `INFORMATION_SCHEMA`.`COLUMNS`
WHERE `TABLE_SCHEMA`='databasename' and `TABLE_NAME`='tablename');
But the second parameter is also giving the column names instead of values.
How can I resolve this issue?
A: You can use a stored procedure with a prepared statement:
DROP PROCEDURE IF EXISTS proc;
DELIMITER $$
CREATE PROCEDURE proc()
BEGIN
DECLARE query longtext;
SELECT
GROUP_CONCAT( CONCAT("(select '",column_name,"' as col_name,",concat("person.",column_name)," as col_val
FROM (select * from test.game_action where id=1) as person,(SELECT * FROM `INFORMATION_SCHEMA`.`COLUMNS`
WHERE `TABLE_SCHEMA`='test' and `TABLE_NAME`='game_action')as t)") SEPARATOR ' union ') into query
FROM (select * from test.game_action where id=1) as person,(SELECT * FROM `INFORMATION_SCHEMA`.`COLUMNS`
WHERE `TABLE_SCHEMA`='test' and `TABLE_NAME`='game_action')as t;
SET @s = query;
PREPARE stmt FROM @s;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
and you can pass the id as a parameter if you want.
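Once created, the procedure can be invoked like this (a plain call; parameterizing the id is left as described above):
CALL proc();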
| |
doc_23538017
|
A: If you are not using identity, then you may need to use cookies for that.
*
*Keep a checkbox in login page for Remember me.
*If the checkbox is checked, keep a cookie with some identifier which is in turn stored to the db.
*Next time anyone comes to the login page, check for this cookie identifier in the database and, if it exists, log them in automatically (see the sketch below).
You can read more by doing a quick search on how to remember user using cookies.
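A rough sketch of steps 2 and 3 in ASP.NET Web Forms (all names here are hypothetical, including SaveTokenForUser; this is my own illustration, not from the answer):
if (RememberMeCheckBox.Checked)
{
    // generate a random identifier and persist it against the user in the database
    var token = Guid.NewGuid().ToString("N");
    SaveTokenForUser(userId, token);
    var cookie = new HttpCookie("remember_me", token)
    {
        Expires = DateTime.UtcNow.AddDays(30),
        HttpOnly = true
    };
    Response.Cookies.Add(cookie);
}
// on the next visit: read Request.Cookies["remember_me"], look the token up
// in the database, and sign the matching user in automatically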
| |
doc_23538018
|
When the container gets started, I want to execute a certain initialization script (init.bat) but also want to keep the user logged into the container session (in cmd).
With this dockerfile:
FROM windowsservercore
ADD sources /init
ENTRYPOINT C:/init/init.bat
and this init.bat (which is supposed to run inside the container on startup):
mkdir C:\myfolder
echo init end
and this startup call for the container:
docker run -it test/test cmd
the init.bat batch file gets executed inside the container, but the user does not stay logged in the container, but the container exits (with exit code 0).
I don't quite understand why it exits. From how I understand the docker documentation:
If the image also specifies an ENTRYPOINT then the CMD or COMMAND get
appended as arguments to the ENTRYPOINT.
the cmd command should get appended to the entrypoint, which is my init script, but it doesn't.
I also tried this syntax, but it does not make a difference.
ENTRYPOINT ["C:/init/init.bat"]
If I remove the ENTRYPOINT from the dockerfile and start the container with the cmd command, I stay in the session and I can of course run the init.bat script manually and it works, but I want it to run automatically.
When I work with Ubuntu containers, I usually use supervisord to execute any initialization scripts, and bin/bash (which equivalents to cmd on Windows) as the command.
I am not sure how to do the same on a Windows container though.
A: Instead of the ENTRYPOINT, you can try putting something like this in your Dockerfile:
CMD C:\init\init.bat && cmd
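Applied to the Dockerfile from the question, that could look like this (an untested sketch):
FROM windowsservercore
ADD sources /init
# run the init script, then keep an interactive cmd session open
CMD C:\init\init.bat && cmd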
A: I've a similar case. Here's my dockerfile
FROM microsoft/dotnet-framework-build:4.7.1
run mkdir c:\WorkSpace
copy ./CreateFolder.bat /WorkSpace
CMD c:\\WorkSpace\\CreateFolder.bat
ENTRYPOINT POWERSHELL Write-Host Folder created ; \
while ($true) { Start-Sleep -Seconds 3600 }
This isn't working. The container stays up but the folder isn't created.
Whereas if i do the opposite :
FROM microsoft/dotnet-framework-build:4.7.1
run mkdir c:\WorkSpace
copy ./CreateFolder.bat /WorkSpace
ENTRYPOINT c:\\WorkSpace\\CreateFolder.bat
CMD POWERSHELL Write-Host Folder created ; \
while ($true) { Start-Sleep -Seconds 3600 }
This is working: the container stays up and the folder is created.
| |
doc_23538019
|
A: I found this solution works, although it doesn't provide you a shell.
*
*Create a normal application with test unit.
*Make sure maven/gradle is installed and added to PATH
*Inside the regular app, create your methods that utilize the code below:
// windows: "cmd","/c"
// unix: "/bin/sh","-c"
ProcessBuilder builder = new ProcessBuilder("cmd","/c","mvn","test");
builder.redirectErrorStream(true);
Process p = builder.start();
BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
// code to process test result from reader
// I only need to send this back to as API response. So there were no other impl
| |
doc_23538020
|
This is useful when the form is long and the user cannot finish it in one sitting. The mixin code below comes directly from the Pro Django book by Marty Alchin. I have commented in the code where the error occurs, which is in the POST method of the mixin. Detailed error description below.
From the traceback, I think the error comes from these two calls, self.get_form(form_class) and get_form_kwargs, but I have no idea how to fix this.
Here is the view:
class ArticleCreateView(PendFormMixin, CreateView):
form_class = ArticleForm
model = Article
template_name = "article_create.html"
success_url = '/admin'
Here is the mixin:
from django.views.generic.edit import FormView
from pend_form.models import PendedForm, PendedValue
from hashlib import md5
class PendFormMixin(object):
form_hash_name = 'form_hash'
pend_button_name = 'pend'
def get_form_kwargs(self):
"""
Returns a dictionary of arguments to pass into the form instantiation.
If resuming a pended form, this will retrieve data from the database.
"""
form_hash = self.kwargs.get(self.form_hash_name)
print "form_hash", form_hash
if form_hash:
import_path = self.get_import_path(self.get_form_class())
return {'data': self.get_pended_data(import_path, form_hash)}
else:
print "called"
# print super(PendFormMixin, self).get_form_kwargs()
return super(PendFormMixin, self).get_form_kwargs()
def post(self, request, *args, **kwargs):
"""
Handles POST requests with form data. If the form was pended, it doesn't follow
the normal flow, but saves the values for later instead.
"""
if self.pend_button_name in self.request.POST:
print "here"
form_class = self.get_form_class()
print form_class
form = self.get_form(form_class)
#the error happens here. below print is not executed
# print "form is ", form
self.form_pended(form)
else:
super(PendFormMixin, self).post(request, *args, **kwargs)
# Custom methods follow
def get_import_path(self, form_class):
return '{0}.{1}'.format(form_class.__module__, form_class.__name__)
def get_form_hash(self, form):
content = ','.join('{0}:{1}'.format(n, form.data[n]) for n in form.fields.keys())
return md5(content).hexdigest()
def form_pended(self, form):
import_path = self.get_import_path(self.get_form_class())
form_hash = self.get_form_hash(form)
print "in form_pended"
pended_form = PendedForm.objects.get_or_create(form_class=import_path,
hash=form_hash)
for name in form.fields.keys():
pended_form.data.get_or_create(name=name, value=form.data[name])
return form_hash
def get_pended_data(self, import_path, form_hash):
data = PendedValue.objects.filter(import_path=import_path, form_hash=form_hash)
return dict((d.name, d.value) for d in data)
Error:
'ArticleCreateView' object has no attribute 'object'
Exception Location: /Users/django/django/lib/python2.7/site-packages/django/views/generic/edit.py in get_form_kwargs, line 125
/Users/pend_form/forms.py in post
form = self.get_form(form_class)
/Users/django/django/lib/python2.7/site-packages/django/views/generic/edit.py in get_form_kwargs
kwargs.update({'instance': self.object})
A: self.object is assigned in post, so if you override post,
don't expect self.object to be assigned before you call super(...).post(...).
A: If you look at the definition of django's CreateView, or its parent BaseCreateView, you'll see that all it does is assigns self.object = None before calling super class methods which define the actual form behavior. That's because it's a CreateView - no object to edit could possibly exist.
Since your mixin overrides this behavior, the rest of the machinery fails when it expects self.object to exist as None.
Add self.object = None to the first line of your def post method.
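A minimal sketch of that fix applied to the mixin (only the first line is new; the rest mirrors the code above):
def post(self, request, *args, **kwargs):
    self.object = None  # CreateView normally sets this before get_form() is called
    if self.pend_button_name in self.request.POST:
        form_class = self.get_form_class()
        form = self.get_form(form_class)
        self.form_pended(form)
    else:
        return super(PendFormMixin, self).post(request, *args, **kwargs)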
| |
doc_23538021
|
/*
* print numbers for ticks
* convert number to 2 decimal places except fractions less than 0.005
* negative numbers ok
*/
printn(n)
double n;
{
register char *fmt, *s, *ss;
double absn;
short sign;
sign = n<0. ? -1 : 1;
absn = n<0. ? -n : n;
if (absn < 0.0000001) absn = 0.;
/* if less than 0.005 then dynamically change the format */
PPA[Phh*6)'sn < 0.005 && absn != 0.0) {
short dec = 2;
double nn = absn;
while (nn < 0.005) {
nn =* 10.;
dec++;
}
fmt = "%-0.2f";
fmt[4] = '0' + dec;
s = printb(fmt, sign*absn);
} else
s = printb("%-0.2f", sign*absn);
/* clean out trailing zeroes/blanks/decimal point */
for (ss = s; *ss; ++ss);
while (*--ss == '0' || *ss == ' ') *ss = 0;
if (*ss == '.') *ss = 0;
return(s);
}
Now I believe
PPA[Phh*6)'sn < 0.005 && absn != 0.0) {
perhaps due to some text conversion error should be:
if (n < 0.005 && absn != 0.0) {
but I'm also getting an "Indirection requires pointer operand ('double' invalid)" on:
nn =* 10.;
Any help would be greatly appreciated.
A: nn *= 10. will multiply nn by 10
nn = *10. will try to dereference 10., which is invalid (being a double), like the error says.
Regarding indirection , the first search engine hit says:
The unary indirection operator (*) dereferences a pointer; that is, it converts a pointer value to an l-value. The operand of the indirection operator must be a pointer to a type. The result of the indirection expression is the type from which the pointer type is derived
In your case, the operand is 10., a double.
| |
doc_23538022
|
// Query 1
$stmt = $db->prepare("UPDATE table_1 SET name=? WHERE somthing=?");
$stmt->bindValue(1, $name, PDO::PARAM_STR);
$stmt->bindValue(2, $something, PDO::PARAM_STR);
$stmt->execute();
// Query 2 (right after the above)
$stmt = $db->prepare("UPDATE table_2 SET another_column=? WHERE id=?");
$stmt->bindValue(1, $another_column, PDO::PARAM_STR);
$stmt->bindValue(2, $id, PDO::PARAM_INT);
$stmt->execute();
I know it is ok for the first lines ($stmt = $db->...); my doubt is about binding values. For example, if I forget to bind something in the first query, will my query use the binding from the second query (or vice versa)? Or is everything reset after execute()?
Which one is a better practice?
*
*Using same variable to avoid mistakes (e.g. Always $stmt)
*Using different variables
A: Using different variables makes it easier to debug, however I do this occasionally because it is easier to type-hint a single statement.
my doubt is about binding values.
Each $db->prepare() returns a brand new \PDOStatement, so there is no issue with bindings carrying over between queries.
A: In cases like this, different statements used in the same scope, I choose more specific names for the statements.
So in your case I would name them $stmtUpdTable1 and $stmtUpdTable2 or something along those lines.
Since I can't comment on other answers: I think it is unnecessary to unset variables which are no longer used; the garbage collector will do its job. No need to make the code messy.
A: I would prefer to unset $stmt after each query. So I don't have to worry about all the things which you have mentioned above.
// Query 1
$stmt = $db->prepare("UPDATE table_1 SET name=? WHERE somthing=?");
$stmt->bindValue(1, $name, PDO::PARAM_STR);
$stmt->bindValue(2, $something, PDO::PARAM_STR);
$stmt->execute();
unset($stmt);
// Query 2 (right after the above)
$stmt = $db->prepare("UPDATE table_2 SET another_column=? WHERE id=?");
$stmt->bindValue(1, $another_column, PDO::PARAM_STR);
$stmt->bindValue(2, $id, PDO::PARAM_INT);
$stmt->execute();
Also, it's good practice to unset variables which are not required any more.
| |
doc_23538023
|
I want to modify the functionality of one of this component's methods, in particular handleDrag().
So I create my ExtendedLibrary module with the following code:
var LibraryComponent = require('libraryComponent');
LibraryComponent.prototype.handleDrag = function() {
console.log("I'm the NEW handleDrag method.");
}
LibraryComponent.prototype.render = function() {
console.log("I'm the NEW render method.");
}
module.exports = LibraryComponent;
As I understand it, changing the prototype of a constructor should change the __proto__ attribute of all its instances.
Into my mounted LibraryComponent, If I access:
this.__proto__.handleDrag() //I'm the NEW handleDrag method.
this.handleDrag() //I'm the OLD handleDrag method.
Why?
By contrast:
this.prototype.render() //I'm the NEW render method.
this.render() //I'm the NEW render method. (Accessing the __proto__ method too).
How can I override handleDrag for good?
I tried class ExtendedLibrary extends LibraryComponent {...} too, and the problem is the same (but I prefer not to include ES6 at all in my project).
A: If you cannot or don't want to use ES6, one approach is to use composition. Just wrap the LibraryComponent with your own component and use a ref to access/override a specific method.
var Wrapper = React.createClass({
libLoaded: function(libComponent) {
if (libComponent) {
libComponent.onDrag = this.onDrag;
}
},
onDrag: function() {
return "Hello drag";
},
render: function() {
return <LibraryComponent ref={this.libLoaded}/>;
}
});
ReactDOM.render(
<Wrapper/>,
document.getElementById('container')
);
https://jsfiddle.net/2n0x666d/3/
| |
doc_23538024
|
The program is supposed to create 2 subprocesses. Each of them sends a set number of signals to the parent: SIGUSR1 5 times and SIGUSR2 8 times, respectively.
To simplify, after many crashes that logged me out of my system, closed all my programs and forced me to log back in, I'm printing information about the parent process instead. The goal is to replace those prints with
kill(getppid(),SIGUSR1) // and SIGUSR2 for second child process.
Current child work function:
void childWork(int loopCounter, int sigNum)
{
for(; loopCounter>0; loopCounter--)
{
if(SIGUSR1==sigNum) //kill(getppid(),SIGUSR1);
printf("[%d] sending SIGUSR1 to %d\n", getpid(),getppid());
else if(SIGUSR2 == sigNum) //kill(getppid(), SIGUSR2);
printf("[%d] sending SIGUSR2 to %d\n", getpid(),getppid());
}
}
Here is the zombie handling function for cleanup:
void handleZombie(int sig) {
while (1) {
pid_t pid = waitpid(0, NULL, WNOHANG);
if (pid < 0) {
if (errno == ECHILD)
return;
printf("Error, cleaning\n");
}
if (pid == 0)
return;
    }
}
And finally main:
int main(int argc, char** argv)
{
printf("[%d] PARENT started! My parent: %d\n", getpid(), getppid());
childrenLeft=2;
setHandler(handleZombie,SIGCHLD);
setHandler(sigHandler1, SIGUSR1);
setHandler(sigHandler2, SIGUSR2);
int i;
for(i=1;i<=childrenLeft;i++)
{
pid_t pid = fork();
if(pid < 0)
printf("Error - fork\n");
if(pid==0)
if(i==1)
{
printf("[%d] child created!\n", getpid());
childWork(5,SIGUSR1);
}
if(i==2)
{
childWork(8, SIGUSR2);
printf("[%d] child created!\n", getpid());
}
exit(EXIT_SUCCESS);
}
printf("Work finished, final numbers:\nSIGUSR1 received: %d\nSIGUSR2 received: %d\n",sig1Count,sig2Count);
while (wait(NULL) > 0)
continue;
printf("[PARENT=%d] terminates\n", getpid());
return EXIT_SUCCESS;
}
The current issue is actually handling the parent process. For a reason I do not understand, my second child isn't created. What's more, the parent PID being printed (30404) comes out of the blue.
[6025] PARENT started! My parent: 1300
[6026] child created!
[6026] sending SIGUSR1 to 6025
[6026] sending SIGUSR1 to 6025
[6026] sending SIGUSR1 to 6025
[6026] sending SIGUSR1 to 30404
[6026] sending SIGUSR1 to 30404
This is the complete output. Please help me understand what is going on here...
A: Note that you don't report that child 2 is created until after childWork() returns.
However, your fundamental problem is the lack of statement-grouping braces after if (pid == 0), which means that the exit(EXIT_SUCCESS); after the two tests if (i == 1) and if (i == 2) causes the parent to exit immediately after launching the first child.
int main(int argc, char** argv)
{
printf("[%d] PARENT started! My parent: %d\n", getpid(), getppid());
childrenLeft=2;
setHandler(handleZombie,SIGCHLD);
setHandler(sigHandler1, SIGUSR1);
setHandler(sigHandler2, SIGUSR2);
int i;
for(i=1;i<=childrenLeft;i++)
{
pid_t pid = fork();
if(pid < 0)
printf("Error - fork\n");
if(pid==0)
{ // Primary bug: braces missing
if(i==1)
{
printf("[%d] child created!\n", getpid());
childWork(5,SIGUSR1);
}
if(i==2)
{
printf("[%d] child created!\n", getpid()); // Moved before childWork()
childWork(8, SIGUSR2);
}
exit(EXIT_SUCCESS); // Only executed by children
} // Primary bug: missing braces
}
printf("Work finished, final numbers:\nSIGUSR1 received: %d\nSIGUSR2 received: %d\n",sig1Count,sig2Count);
while (wait(NULL) > 0)
continue;
printf("[PARENT=%d] terminates\n", getpid());
return EXIT_SUCCESS;
}
This is the bare minimum fixing needed; there are many other changes that could and perhaps should be made.
| |
doc_23538025
|
var dataString="hy";
var htmlnew="<input type='checkbox' name='formDoor' value='A' class='list' enable='true'>Check 1";
alert(htmlnew);
$(".list").change(function()
{
$("#regTitle").append(htmlnew);
});
});
The above is what I use: each time I check a checkbox with class list, a new one is appended to the #regTitle div. The problem I am facing is that the newly generated checkboxes do not work (checking them does not append anything). Can you tell me what the problem is?
A: You should delegate event handling with on() so that it also works on newly added elements (jQuery 1.7+):
$("body").on("change", ".list", function()
{
$("#regTitle").append(htmlnew);
});
If you use an older version of jQuery, use delegate() or live():
$(document).delegate(".list","change", function()
{
$("#regTitle").append(htmlnew);
});
$(".list").live("change", function()
{
$("#regTitle").append(htmlnew);
});
A: Your checkbox's change event does not attach to the dynamically generated elements. You will have to delegate the bind. Using .on() is very good for this purpose.
$("body").on("change", ".list",function() {
$("#regTitle").append(htmlnew);
});
| |
doc_23538026
|
When the form loads, the checkbox state is shown as grayed-out (a solid blue square). When the checkbox cell is focused, I can set the checkbox state to true or false (the set value is correctly reflected in the datatable).
However, when the checkbox cell loses focus and the focus transfers to the next cell, the visual style of the checkbox reverts to the previous style (a solid square), although the value is correctly reflected in the datatable and is not changed.
How can I configure the check box column to show the real state of the checkbox?
NOTE: in the above picture, the current state of the first row checkbox is checked (true), the second row is unchecked (false). The correct state is only shown in the third row which is focused.
A: The problem has been solved by following procedure:
The ColumnEdit property in the grid view must not be set to a RepositoryItemCheckedEdit. Leave ColumnEdit unset; to show a checkbox in that column, the data type of the corresponding column in the datatable must be boolean.
Just that simple.
| |
doc_23538027
|
The controllers folder has an index action and the views folder has an index page.
My module is named employee and contains a logout action. After logout I want to redirect to the index page of my parent project, but it currently goes to the index page of the employee module.
Can anybody help me?
Module employee controller action
public function actionLogout() {
Yii::app()->user->logout();
//$this->redirect(Yii::app()->homeUrl);
$this->redirect(array('site/index',));
}
A: $this->redirect('/site/index');
or
$this->redirect('/');
A: right syntax is
$this->redirect(array('controller/action', 'id'=>$id));
So in your example, $this->redirect(array('site/index')); redirects to the Site controller's index action. First check the URL you are redirected to, and check urlManager in config/main.php.
| |
doc_23538028
|
from io import StringIO
import requests
import json
import pandas as pd
# @hidden_cell
# This function accesses a file in your Object Storage. The definition contains your credentials.
# You might want to remove those credentials before you share your notebook.
def get_object_storage_file_with_credentials_xxxxxx(container, filename):
"""This functions returns a StringIO object containing
the file content from Bluemix Object Storage."""
url1 = ''.join(['https://identity.open.softlayer.com', '/v3/auth/tokens'])
data = {'auth': {'identity': {'methods': ['password'],
'password': {'user': {'name': 'member_xxxxxx','domain': {'id': 'xxxxxxx'},
'password': 'xxxxx),(xxxxx'}}}}}
headers1 = {'Content-Type': 'application/json'}
resp1 = requests.post(url=url1, data=json.dumps(data), headers=headers1)
resp1_body = resp1.json()
for e1 in resp1_body['token']['catalog']:
if(e1['type']=='object-store'):
for e2 in e1['endpoints']:
if(e2['interface']=='public'and e2['region']=='dallas'):
url2 = ''.join([e2['url'],'/', container, '/', filename])
s_subject_token = resp1.headers['x-subject-token']
headers2 = {'X-Auth-Token': s_subject_token, 'accept': 'application/json'}
resp2 = requests.get(url=url2, headers=headers2)
return StringIO(resp2.text)
# Your data file was loaded into a StringIO object and you can process the data.
# Please read the documentation of pandas to learn more about your possibilities to load your data.
# pandas documentation: http://pandas.pydata.org/pandas-docs/stable/io.html
data_1 = get_object_storage_file_with_credentials_20e75635ab104e58bd1a6e91635fed51('DefaultProjectxxxxxxxx', 'train.zip')
This gives an output:
data_1
<_io.StringIO at 0x7f8a288cd3a8>
But when I try to use Zipfile to unzip it, I'm greeted with the following error:
from zipfile import ZipFile
file = ZipFile(data_1)
BadZipFile: File is not a zip file
How to I access the file in IBM DSX?
A: You can use the function shown below to save a zip file from object storage. The credentials argument is the dictionary inserted into the code by a DSX notebook. This function is also available as a gist.
import zipfile
from io import BytesIO
import requests
import json
import pandas as pd
def get_zip_file(credentials):
url1 = ''.join(['https://identity.open.softlayer.com', '/v3/auth/tokens'])
data = {'auth': {'identity': {'methods': ['password'], 'password': {'user': {'name': credentials['username'],'domain': {'id': credentials['domain_id']}, 'password': credentials['password']}}}}}
headers1 = {'Content-Type': 'application/json'}
resp1 = requests.post(url=url1, data=json.dumps(data), headers=headers1)
resp1_body = resp1.json()
for e1 in resp1_body['token']['catalog']:
if(e1['type']=='object-store'):
for e2 in e1['endpoints']:
if(e2['interface']=='public' and e2['region']==credentials['region']):
url2 = ''.join([e2['url'],'/', credentials['container'], '/', credentials['filename']])
s_subject_token = resp1.headers['x-subject-token']
headers2 = {'X-Auth-Token': s_subject_token, 'accept': 'application/json'}
r = requests.get(url=url2, headers=headers2, stream=True)
z = zipfile.ZipFile(BytesIO(r.content))
z.extractall()# save zip contents to disk
return(z)
z = get_zip_file(credentials)
A: Extract your uploaded zip file with the code below. Use pwd to get your working directory and set up your path folders.
from io import BytesIO
import zipfile
zip_ref = zipfile.ZipFile(BytesIO(streaming_body_1.read()), 'r')
file_paths = zip_ref.namelist()
for path in file_paths:
zip_ref.extract(path)
| |
doc_23538029
|
I can use cmake to build the target (on my own Windows 7 machine), which needs environment variables set to specify library paths, including node-gyp and MySQL. I got the MySQL path from the AppVeyor docs; please correct me if I'm wrong.
But I don't know how to set the node-gyp directory in the AppVeyor environment; it is located at the Windows equivalent of ~/.node-gyp. I tried the script below, and it errored out on the NODE_GYP_DIR line.
environment:
# set variables
NODE_GYP_VER: 0.12.7
NODE_GYP_DIR: %HOMEDRIVE%%HOMEPATH%\.node-gyp
LIBMYSQL_INCLUDE_DIR: C:\Program Files\MySql\MySQL Server 5.6\include
LIBMYSQL_LIBRARY: C:\Program Files\MySql\MySQL Server 5.6\lib
My question is: can I assume the Windows equivalent of ~/.node-gyp exists? How do I set that path in an environment variable for my cmake? Thanks!
A: Use %USERPROFILE% instead:
environment:
# set variables
NODE_GYP_VER: 0.12.7
NODE_GYP_DIR: '%USERPROFILE%\.node-gyp'
LIBMYSQL_INCLUDE_DIR: C:\Program Files\MySql\MySQL Server 5.6\include
LIBMYSQL_LIBRARY: C:\Program Files\MySql\MySQL Server 5.6\lib
| |
doc_23538030
|
I get the following error:
Error:(89, 13) error: cannot find symbol class SectionsPagerAdapter
My main activity looks like this
package com.madchallenge2016edwindaniel.upbirdwatchers;
import android.content.pm.PackageManager;
import android.graphics.Bitmap;
import android.provider.MediaStore;
import android.content.Intent;
import android.content.SharedPreferences;
import android.net.Uri;
import android.support.annotation.NonNull;
import android.support.design.widget.NavigationView;
import android.support.design.widget.TabLayout;
import android.support.design.widget.FloatingActionButton;
import android.support.design.widget.Snackbar;
import android.support.v4.app.FragmentStatePagerAdapter;
import android.support.v4.app.FragmentManager;
import android.support.v4.app.FragmentTransaction;
import android.support.v4.view.GravityCompat;
import android.support.v4.view.PagerAdapter;
import android.support.v4.widget.DrawerLayout;
import android.support.v7.app.ActionBarDrawerToggle;
import android.support.v7.app.AlertDialog;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.widget.Toolbar;
import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentPagerAdapter;
import android.support.v4.view.ViewPager;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.CompoundButton;
import android.widget.Switch;
import android.widget.TextView;
import com.google.android.gms.appindexing.Action;
import com.google.android.gms.appindexing.AppIndex;
import com.google.android.gms.auth.api.Auth;
import com.google.android.gms.auth.api.signin.GoogleSignInAccount;
import com.google.android.gms.common.ConnectionResult;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.common.api.Status;
import com.google.android.gms.auth.api.signin.GoogleSignInOptions;
import com.google.android.gms.auth.api.signin.GoogleSignInResult;
public class MainActivity extends AppCompatActivity implements
GoogleApiClient.ConnectionCallbacks,
GoogleApiClient.OnConnectionFailedListener {
//Defining Variables
public static final int REQUEST_CAPTURE = 1;
ImageView reslut_photots;
private Toolbar toolbar;
private NavigationView navigationView;
private DrawerLayout drawerLayout;
private Button btnLogout;
private SharedPreferences preferenceSettings;
private SharedPreferences.Editor preferenceEditor;
private static final int PREFERENCE_MODE_PRIVATE = 0;
/**
* The {@link PagerAdapter} that will provide
* fragments for each of the sections. We use a
* {@link FragmentPagerAdapter} derivative, which will keep every
* loaded fragment in memory. If this becomes too memory intensive, it
* may be best to switch to a
* {@link FragmentStatePagerAdapter}.
*/
private SectionsPagerAdapter mSectionsPagerAdapter;
/**
* The {@link ViewPager} that will host the section contents.
*/
private ViewPager mViewPager;
private GoogleApiClient mGoogleApiClient;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Button click = (Button) findViewById(R.id.BtnCamera);
reslut_photots = (ImageView) findViewById(R.id.BtnCamera);
if (!hasCamera()) {
//click.setEnabled(false);
}
GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
.requestEmail()
.build();
mGoogleApiClient = new GoogleApiClient.Builder(this)
.enableAutoManage(this /* FragmentActivity */, this /* OnConnectionFailedListener */)
.addApi(Auth.GOOGLE_SIGN_IN_API, gso)
.addApi(AppIndex.API).build();
//Background
getWindow().setBackgroundDrawableResource(R.drawable.nature_background);
/** //Logout button listener en logout
btnLogout = (Button) findViewById(R.id.Logout);
btnLogout.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v){
Auth.GOOGLE_SIGN_IN_API.signOut(mGoogleApiClient).setResultCallback(
new ResultCallback<Status>(){
@Override public void onResult(Status status){
}
});
}
});
*/
// Initializing Toolbar and setting it as the actionbar
toolbar = (Toolbar) findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
//Initializing NavigationView
navigationView = (NavigationView) findViewById(R.id.navigation_view);
//Setting Navigation View Item Selected Listener to handle the item click of the navigation menu
navigationView.setNavigationItemSelectedListener(new NavigationView.OnNavigationItemSelectedListener() {
// This method will trigger on item Click of navigation menu
@Override
public boolean onNavigationItemSelected(MenuItem menuItem) {
//Checking if the item is in checked state or not, if not make it in checked state
if (menuItem.isChecked()) menuItem.setChecked(false);
else menuItem.setChecked(true);
//Closing drawer on item click
drawerLayout.closeDrawers();
//Check to see which item was being clicked and perform appropriate action
switch (menuItem.getItemId()) {
case R.id.item1:
Intent intent = new Intent(getApplicationContext(), MapActivity.class);
startActivity(intent);
return true;
case R.id.item2:
return true;
default:
return true;
}
}
});
// Initializing Drawer Layout and ActionBarToggle
drawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
ActionBarDrawerToggle actionBarDrawerToggle = new ActionBarDrawerToggle(this,drawerLayout,toolbar,R.string.openDrawer, R.string.closeDrawer){
@Override
public void onDrawerClosed(View drawerView) {
// Code here will be triggered once the drawer closes; as we don't want anything to happen we leave this blank
super.onDrawerClosed(drawerView);
}
@Override
public void onDrawerOpened(View drawerView) {
// Code here will be triggered once the drawer opens; as we don't want anything to happen we leave this blank
super.onDrawerOpened(drawerView);
}
};
//Setting the actionbarToggle to drawer layout
drawerLayout.setDrawerListener(actionBarDrawerToggle);
//calling sync state is necessary or else your hamburger icon won't show up
actionBarDrawerToggle.syncState();
Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
// Create the adapter that will return a fragment for each of the three
// primary sections of the activity.
mSectionsPagerAdapter = new SectionsPagerAdapter(getSupportFragmentManager());
// Set up the ViewPager with the sections adapter.
mViewPager = (ViewPager) findViewById(R.id.container);
mViewPager.setAdapter(mSectionsPagerAdapter);
TabLayout tabLayout = (TabLayout) findViewById(R.id.tabs);
tabLayout.setupWithViewPager(mViewPager);
tabLayout.setTabMode(TabLayout.MODE_SCROLLABLE);
}
public boolean hasCamera() {
return getPackageManager().hasSystemFeature(PackageManager.FEATURE_CAMERA_ANY);
}
public void luanchCamera (View v) {
Intent i = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
startActivityForResult(i, REQUEST_CAPTURE);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (requestCode == REQUEST_CAPTURE && resultCode == RESULT_OK) {
Bundle extras = data.getExtras();
Bitmap photo = (Bitmap) extras.get("data");
reslut_photots.setImageBitmap(photo);
}
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// Handle action bar item clicks here. The action bar will
// automatically handle clicks on the Home/Up button, so long
// as you specify a parent activity in AndroidManifest.xml.
switch (item.getItemId()) {
case android.R.id.home:
drawerLayout.openDrawer(GravityCompat.START);
return true;
}
return super.onOptionsItemSelected(item);
}
@Override
public void onConnectionFailed(@NonNull ConnectionResult connectionResult) {
}
@Override
public void onConnected(Bundle bundle) {
}
@Override
public void onConnectionSuspended(int i) {
}
@Override
public void onStart() {
super.onStart();
// ATTENTION: This was auto-generated to implement the App Indexing API.
// See https://g.co/AppIndexing/AndroidStudio for more information.
mGoogleApiClient.connect();
Action viewAction = Action.newAction(
Action.TYPE_VIEW, // TODO: choose an action type.
"Main Page", // TODO: Define a title for the content shown.
// TODO: If you have web page content that matches this app activity's content,
// make sure this auto-generated web page URL is correct.
// Otherwise, set the URL to null.
Uri.parse("http://host/path"),
// TODO: Make sure this auto-generated app deep link URI is correct.
Uri.parse("android-app://com.madchallenge2016edwindaniel.upbirdwatchers/http/host/path")
);
AppIndex.AppIndexApi.start(mGoogleApiClient, viewAction);
}
@Override
public void onStop() {
super.onStop();
// ATTENTION: This was auto-generated to implement the App Indexing API.
// See https://g.co/AppIndexing/AndroidStudio for more information.
Action viewAction = Action.newAction(
Action.TYPE_VIEW, // TODO: choose an action type.
"Main Page", // TODO: Define a title for the content shown.
// TODO: If you have web page content that matches this app activity's content,
// make sure this auto-generated web page URL is correct.
// Otherwise, set the URL to null.
Uri.parse("http://host/path"),
// TODO: Make sure this auto-generated app deep link URI is correct.
Uri.parse("android-app://com.madchallenge2016edwindaniel.upbirdwatchers/http/host/path")
);
AppIndex.AppIndexApi.end(mGoogleApiClient, viewAction);
mGoogleApiClient.disconnect();
}
@Override
public boolean onCreateOptionsMenu (Menu menu){
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.menu_main, menu);
return true;
}
@Override
public boolean onOptionsItemSelected (MenuItem item){
// Handle action bar item clicks here. The action bar will
// automatically handle clicks on the Home/Up button, so long
// as you specify a parent activity in AndroidManifest.xml.
int id = item.getItemId();
//noinspection SimplifiableIfStatement
if (id == R.id.action_settings) {
return true;
}
return super.onOptionsItemSelected(item);
}
/**
* A placeholder fragment containing a simple view.
*/
public static class PlaceholderFragment extends Fragment {
/**
* The fragment argument representing the section number for this
* fragment.
*/
private static final String ARG_SECTION_NUMBER = "section_number";
public PlaceholderFragment() {
}
/**
* Returns a new instance of this fragment for the given section
* number.
*/
public static PlaceholderFragment newInstance(int sectionNumber) {
PlaceholderFragment fragment = new PlaceholderFragment();
Bundle args = new Bundle();
args.putInt(ARG_SECTION_NUMBER, sectionNumber);
fragment.setArguments(args);
return fragment;
/** @Override public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
if (getArguments().getInt(ARG_SECTION_NUMBER)==1){
View rootView = inflater.inflate(R.layout.fragment_sub_birds_you_seen, container, false);
return rootView;
} else if (getArguments().getInt(ARG_SECTION_NUMBER)==2) {
View rootView = inflater.inflate(R.layout.fragment_sub_seed_eating, container, false);
return rootView;
} else if (getArguments().getInt(ARG_SECTION_NUMBER)==3) {
View rootView = inflater.inflate(R.layout.fragment_sub_insect_eeters, container, false);
return rootView;
}else {
View rootView = inflater.inflate(R.layout.fragment_main, container, false);
TextView textView = (TextView) rootView.findViewById(R.id.section_label);
textView.setText(getString(R.string.section_format, getArguments().getInt(ARG_SECTION_NUMBER)));
return rootView;
}
@Override public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
if (getArguments().getInt(ARG_SECTION_NUMBER) == 1) {
View rootView = inflater.inflate(R.layout.fragment_sub_birds_you_seen, container, false);
return rootView;
} else if (getArguments().getInt(ARG_SECTION_NUMBER) == 2) {
View rootView = inflater.inflate(R.layout.fragment_sub_seed_eating, container, false);
return rootView;
} else if (getArguments().getInt(ARG_SECTION_NUMBER) == 3) {
View rootView = inflater.inflate(R.layout.fragment_sub_insect_eeters, container, false);
return rootView;
} else {
View rootView = inflater.inflate(R.layout.fragment_main, container, false);
TextView textView = (TextView) rootView.findViewById(R.id.section_label);
textView.setText(getString(R.string.section_format, getArguments().getInt(ARG_SECTION_NUMBER)));
return rootView;
}
}
}
*/
}
/**
* A {@link FragmentPagerAdapter} that returns a fragment corresponding to
* one of the sections/tabs/pages.
*/
class SectionsPagerAdapter extends FragmentPagerAdapter {
public SectionsPagerAdapter(FragmentManager fm) {
super(fm);
}
@Override
public Fragment getItem(int position) {
// getItem is called to instantiate the fragment for the given page.
// Return a PlaceholderFragment (defined as a static inner class below).
return MainActivity.PlaceholderFragment.newInstance(position + 1);
}
@Override
public int getCount() {
// Show 3 total pages.
return 3;
}
@Override
public CharSequence getPageTitle(int position) {
switch (position) {
case 0:
return "Birds you have seen";
case 1:
return "Seed-eating";
case 2:
return "insect eeters";
}
return null;
}
}
}
}
My app Gradle file looks like this:
apply plugin: 'com.android.application'
android {
compileSdkVersion 23
buildToolsVersion "23.0.1"
defaultConfig {
applicationId "com.madchallenge2016edwindaniel.upbirdwatchers"
minSdkVersion 16
targetSdkVersion 23
versionCode 1
versionName "1.0"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
testCompile 'junit:junit:4.12'
compile 'com.android.support:appcompat-v7:23.1.1'
compile 'com.android.support:design:23.1.1'
compile 'com.google.android.gms:play-services-auth:9.2.1'
compile 'de.hdodenhof:circleimageview:1.3.0'
compile 'com.squareup.picasso:picasso:2.5.2'
compile 'com.google.android.gms:play-services-appindexing:9.2.1'
compile 'com.android.support:support-v4:23.1.1'
compile 'com.android.support:support-v13:+'
}
apply plugin: 'com.google.gms.google-services'
A: You have two onOptionsItemSelected methods defined in MainActivity. Remove one of those and the class SectionsPagerAdapter will be found by the compiler.
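For reference, a minimal sketch of what the single remaining method could look like once the two copies are merged (the drawer handling and R.id.action_settings are taken from the code above; adjust as needed):
@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        case android.R.id.home:
            // from the first copy: open the navigation drawer
            drawerLayout.openDrawer(GravityCompat.START);
            return true;
        case R.id.action_settings:
            // from the second copy: settings handling
            return true;
        default:
            return super.onOptionsItemSelected(item);
    }
}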
| |
doc_23538031
|
src/SwipeBundle/Service
Inside of this directory I have a class called:
RSA.php
This class has
namespace SwipeBundle\Service;
require_once __DIR__ . 'RSA/Crypt/RSA.php';
require_once __DIR__ . 'RSA/File/X509.php';
class RSA
{
public function privateKey() {
require_once __DIR__ . 'RSA/Certificates/private.txt';
}
public function certificates() {
return require_once __DIR__ . 'RSA/Certificates/pinew.cer';
}
}
and the files directory are
src/SwipeBundle/Service/RSA/Certificates/pinew.cer
src/SwipeBundle/Service/RSA/Certificates/private.txt
src/SwipeBundle/Service/RSA/Crypt/RSA.php
src/SwipeBundle/Service/RSA/File/X509.php
I want to load the classes of this service class in my Controller, like so:
Controller
use SwipeBundle\Service;
class BillsPayController extends Controller
{
public function indexAction() {
$rsa = new \Crypt_RSA();
$hash = new \Crypt_Hash('sha1');
$x509 = new \File_X509();
$privatekey = file_get_contents(RSA()->privateKey());
$x509->loadX509(file_get_contents(RSA()->certificates()));
}
}
I also tried using this one:
use SwipeBundle\Service\RSA;
class BillsPayController extends Controller
{
public function indexAction() {
$a = new RSA();
$a->privateKey();
}
}
Errors I encountered:
Attempt result 1: Attempted to load class "Crypt_RSA" from the global namespace.
Did you forget a "use" statement?
Attempt result 2: Compile Error: main(): Failed opening required '/Users/jaysonlacson/Sites/contactless/src/SwipeBundle/ServiceRSA/Crypt/RSA.php' (include_path='.:')
A: I think you are missing a forward slash like this:
require_once __DIR__ . '/RSA/Crypt/RSA.php';
require_once __DIR__ . '/RSA/File/X509.php';
I'm just guessing, but can you try that?
| |
doc_23538032
|
public static void Login(String username, String password) {
final WebClient webClient = new WebClient(BrowserVersion.CHROME);
try {
final HtmlPage page = webClient.getPage("https://www.linkedin.com/secure/login");
final HtmlForm form = page.getForms().get(0);
final HtmlSubmitInput button = form.getInputByName("signin");
final HtmlTextInput emailBtn = form.getInputByName("session_key");
final HtmlPasswordInput passBtn = form.getInputByName("session_password");
emailBtn.setValueAttribute(username);
passBtn.setValueAttribute(password);
final HtmlPage page2 = button.click();
System.out.println(page2.getWebResponse().getContentAsString());
} catch (Exception ex ){
ex.printStackTrace();
}
}
Could you please assist in finding what is wrong with my code, and how do I navigate to another page after I log in?
A: Try this.
Here is a solution I found for the LinkedIn login:
try {
String url = "https://www.linkedin.com/uas/login?goback=&trk=hb_signin";
final WebClient webClient = new WebClient();
webClient.getOptions().setJavaScriptEnabled(false);
webClient.getOptions().setCssEnabled(false);
final HtmlPage loginPage = webClient.getPage(url);
//Get Form By name
final HtmlForm loginForm = loginPage.getFormByName("login");
final HtmlSubmitInput button = loginForm.getInputByName("signin");
final HtmlTextInput usernameTextField = loginForm.getInputByName("session_key");
final HtmlPasswordInput passwordTextField = loginForm.getInputByName("session_password");
usernameTextField.setValueAttribute(userName);//your Linkedin Username
passwordTextField.setValueAttribute(password);//Your Linkedin Password
final HtmlPage responsePage = button.click();
String htmlBody = responsePage.getWebResponse().getContentAsString();
System.out.println(htmlBody);
} catch (Exception ex) {
ex.printStackTrace();
}
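Regarding navigating to another page after logging in: as far as I know you can keep using the same WebClient inside the same try block, since it holds the session cookies from the login. A rough sketch (the URL is just an example placeholder):
// Reuse the authenticated webClient so the login session is preserved.
final HtmlPage nextPage = webClient.getPage("https://www.linkedin.com/feed/");
System.out.println(nextPage.getTitleText());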
| |
doc_23538033
|
glBindFramebuffer(GL_FRAMEBUFFER, workFrame) ;
glUseProgram(ShaderPrograms.HiZData.theProgram);
glActiveTexture(GL_TEXTURE0 + ShaderPrograms.HiZData.Cache.lastMipBindingIndex);
glBindTexture(GL_TEXTURE_2D, worldDepth);
glDepthFunc(GL_ALWAYS);
for(int i = 1; i<numLevels; i++) {
glViewport(0, 0, depthDims.get(i).x, depthDims.get(i).y);
// bind next level for rendering but first restrict fetches only to previous level
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, i - 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, i - 1);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, worldDepth, i);
// draw full-screen quad
Quad2D.renderFullScreenNDCquad();
}
glViewport(0, 0, Display.getWidth(), Display.getHeight());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
If I comment out the line that renders the full-screen quad (the draw call that actually writes to the mip levels), everything is correct and normal.
Any help please?
EDIT: I can fix it if comment out these lines:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
why?
| |
doc_23538034
|
A: Here is an example. In DemoClass the variables are private, so they cannot be accessed directly. You can only get and set these variables through getters and setters.
public class DemoClass {
// you can not get these variable directly
private String stringValue;
private int integerValue;
public DemoClass(String stringValue, int integerValue) {
this.stringValue = stringValue;
this.integerValue = integerValue;
}
public void setStringValue(String stringValue) {
this.stringValue = stringValue;
}
public void setIntegerValue(int integerValue) {
this.integerValue = integerValue;
}
public String getStringValue() {
return stringValue;
}
public int getIntegerValue() {
return integerValue;
}
}
class Main {
public static void main(String[] args) {
DemoClass demoClass =new DemoClass("My String Value",120);
System.out.println(demoClass.getIntegerValue());
System.out.println(demoClass.getStringValue());
}
}
A: If this is your main code, then the answer would be yes; that's why we set any variable except global variables to private.
class Demo {
private String Var = "100";
void Meth(String str) {
System.out.println(str + Var);
}
}
class Main {
public static void main(String[] args) {
Demo demo1 = new Demo();
demo1.Meth("10 x 10 = ");
System.out.println(demo1.Var);//Error. This variable is set to private so it cannot be accessed.
}
}
Private variables can only be accessed from within the class (the control block) that declares them.
| |
doc_23538035
|
The problem is that component Auth::user() isn't working in my Event.
What is the correct way to use Auth within Events?
public function __construct()
{
if(Auth::user()->role == "XXXX")
{
$candidate = count(Candidate::CountNewCandidate());
}
else
{
$candidate = count(Candidate::CountNewCandidateGroup());
}
$this->data = [ 'cCandidate' => $candidate ];
}
A: If there is no authenticated user, Auth::user() will return null, so Auth::user()->role will raise a "Trying to get property of non-object" error. Check whether there is an authenticated user with Auth::check(), and then you can check the role:
public function __construct()
{
if(auth()->check() && auth()->user()->role == "XXXX")
{
$candidate = count(Candidate::CountNewCandidate());
}
else
{
$candidate = count(Candidate::CountNewCandidateGroup());
}
$this->data = [ 'cCandidate' => $candidate ];
}
Note: I used the helper function auth().
I hope this will help you.
A: The reason Auth::user() is null in an event (particularly if it is queued) is that there is no user. The authentication data is typically stored in a session, and when your event is triggered there is no session variable and therefore no session data.
In the past I have passed in the user in the event, then to authenticate the user within the event handler I would call:
\Auth::attempt(['email', $event->performedBy()->email]);
This was necessary because my application had many functions tightly coupled with Authentication (instead of a UserInterface). Ideally I could have just passed in the user as a dependency.
A: It looks like you didn't import the namespace for the Auth facade. You have to add it in the declarations
use Illuminate\Support\Facades\Auth;
A: I solved it by passing the user to the event, through data in the routes:
Route::group(['middleware' => ['xxxx']], function () {
event(new NameEvent($data));
Route::auth();
});
| |
doc_23538036
|
Note: I don't want the value of the Shiny output; I know document.querySelector('#table').innerHTML returns the value of too.
library(shiny)
ui <- fluidPage(
HTML('<script>
$( document ).on("shiny:sessioninitialized", function(event) {
Shiny.setInputValue("too", "noone");
});</script>'),
textOutput("table")
)
server <- function(input, output) {
output$table <- renderPrint(input$too)
}
shinyApp(ui,server)
A: I'm not sure if I understand the problem correctly, but nevertheless:
You are already providing the value to Shiny.setInputValue() in JavaScript, so why don't you declare a variable like this:
library(shiny)
ui <- fluidPage(
HTML('<script>
$(document).on("shiny:sessioninitialized", function(event) {
var inputValue = "noone"
console.log(inputValue)
Shiny.setInputValue("too", inputValue);
});</script>'),
textOutput("table")
)
server <- function(input, output) {
output$table <- renderPrint(input$too)
}
shinyApp(ui, server)
A: I think you may save it into a global variable instead:
library(shiny)
js_set <- "$(document).on('shiny:sessioninitialized', function(event) {
too = Math.random();
Shiny.setInputValue('too', too);
})"
js_get <- "$(document).on('shiny:value', function(event) {
console.log(too);
})"
ui <- fluidPage(
tags$script("var too = 0;"),
tags$script(js_set),
tags$script(js_get),
textOutput("table")
)
server <- function(input, output) {
output$table <- renderPrint({
req(input$too)
input$too
})
}
shinyApp(ui,server)
A: You can use this function to trigger an event whenever you set an input value:
function setInputValue(name, value) {
Shiny.setInputValue(name, value);
$('body').trigger('myevent', [name, value]);
}
Then you can listen to this event:
$('body').on('myevent', function(event, name, value) {
// do something with name and value
});
Example:
js <- "
function setInputValue(name, value) {
Shiny.setInputValue(name, value);
$('body').trigger('myevent', [name, value]);
}
$('body').on('myevent', function(event, name, value) {
console.log('name:', name);
console.log('value:', value);
});
$(document).on('shiny:sessioninitialized', function() {
setInputValue('too', 'hello');
});"
ui <- fluidPage(
tags$script(HTML(js)),
verbatimTextOutput("table")
)
server <- function(input, output) {
output$table <- renderPrint({
input$too
})
}
shinyApp(ui,server)
| |
doc_23538037
|
I have created the function below to create the placeholder stated before, but I'd like to also add that placeholder text right after the user clears the Entry widget. Is that possible?
def placeholder(entry, case: str):
placeholder = f'type the {case} number...'
def focusIn():
if entry.get() == placeholder:
entry.delete(0, tk.END)
entry.config({'foreground': 'Black'})
def focusOut():
if entry.get() == '':
entry.insert(0, placeholder)
if entry.get() == placeholder:
entry.config({'foreground': 'Grey'})
if entry.get() == '':
entry.insert(0, placeholder)
if entry.get() == placeholder:
entry.config({'foreground': 'Grey'})
entry.bind('<FocusIn>', lambda event: focusIn())
entry.bind('<FocusOut>', lambda event: focusOut())
A: I think for this you would be better served by using a validate function instead of binding events. Here's a rough outline for you:
import tkinter as tk
def placeholder(data, event, widget):
if event == 'key' and data == '':
print(f'{widget}: user has cleared all data')
if event == 'key' and data != '':
print(f'{widget}: user has typed something')
if event == 'focusout':
print(f'{widget}: user has left')
if event == 'focusin':
print(f'{widget}: user has joined')
return True # critical!!
root = tk.Tk()
vcmd = root.register(placeholder), '%P', '%V', '%W'
for i in range(1,5):
ent = tk.Entry(root, validate="all", validatecommand=vcmd)
ent.pack()
root.mainloop()
| |
doc_23538038
|
form = AddSiteForm(request.user, request.POST)
if form.is_valid():
obj = form.save(commit=False)
obj.user = request.user
obj.save()
data['status'] = 'success'
data['html'] = render_to_string('site.html', locals(), context_instance=RequestContext(request))
return HttpResponse(simplejson.dumps(data), mimetype='application/json')
How do I get the currently saved object (including the internally generated id column) and pass it to the template?
Any help guys?
Mridang
A: obj is the currently saved object (created when you call form.save), and obj.id is the id. It's already passed in locals().
This all may seem obvious, but it's all I could decipher from your question.
| |
doc_23538039
|
Here is my code: http://jsfiddle.net/haenx/85mhj/
As you can see, the player will hit the enemy each time both boxes are overlapping. But the given polygon has another shape, and it seems it is being ignored...
I can't figure out why. Crafty's docs are also not the best.
Crafty.init(300,300, document.querySelector('#Game'));
Crafty.e('2D, Canvas, Color, Mouse')
.attr({
x: 0,
y: 0,
w: 300,
h: 300
})
.color('#333')
.bind('MouseMove', function () {
player.x = Crafty.mousePos.x - (32 / 2);
player.y = Crafty.mousePos.y - (32 / 2);
});
Crafty.c('Enemy', {
init: function () {
this
.addComponent('2D, Canvas, Color')
.attr({
x: 134,
y: 134,
w: 32,
h: 32
})
.color('#ff0');
}
})
var enemy = Crafty.e('Enemy');
var player = Crafty.e('2D, Canvas, Color, Collision, WiredHitBox')
.attr({
x: 134,
y: 134,
w: 32,
h: 32
})
.color('#f00');
player
.collision(new Crafty.polygon([0,0], [0,6], [8, 13], [24,13], [32,6], [32,0]))
.onHit('Enemy',
function() {
enemy.color('#fff');
},
function() {
enemy.color('#ff0');
}
);
| |
doc_23538040
|
https://github.com/node-schedule/node-schedule
When I set up a scheduler that runs every 5 minutes, and the job does
not complete within 5 minutes, will the scheduler then
start another thread or not?
Please help me with this.
Thanks.
A: Since jobs don't seem to have a mechanism to let the scheduler know they are done, jobs will be scheduled according to their scheduled time alone.
In other words: if you schedule a job to run every 5 minutes, it will be started every 5 minutes, even if the job itself takes more than 5 minutes to complete.
To clarify: this doesn't start a new thread for each job, as JS is single-threaded. If a job blocks the event loop (for instance by doing heavy calculations), it is possible for the scheduler to not be able to start a new job when its time has arrived, but blocking the event loop is not a good thing.
| |
doc_23538041
|
To make it easier to understand I will first explain the workflow.
I receive web pages via email, that is, HTML emails. The web pages contain HTML forms in such a way that, once the form is complete, it is sent to the proper web server (php) to store data.
I mostly use Outlook 2007 as my email client (don't say anything here, I know!!!), but due to some security restrictions, IFRAMES are disabled when "opening" the email. I have circumvented this problem using a VBA script that copies the whole page content, saves it on the filesystem as a stand-alone web page and loads it into the browser (Firefox).
Once the page is loaded into the browser, the address bar shows a local/filesystem URL, such as
file:///C:/Users/Bob/Desktop/outlookpage.htm
Till here no problem, works fine; now the problem:
I wanted to dynamically update page contents using Ajax (jQuery.load); however, that's where the S-O-P comes in. The PHP page being loaded to dynamically update the web page is seen as running on another domain, and is thus blocked.
I was wondering how to circumvent this.
A: That's not going to work because in order to bypass the same origin policy, you would need to use a proxy on the same domain, which will then communicate to the page that's handling the data on a different domain. There's no way to generate a proxy script on another user's computer (or at least, there SHOULDN'T BE A WAY). I would either just post the form normally, which will open the user's default browser, or provide a link to an online form in the email. The link should be provided anyway, in case their email client doesn't support HTML email.
| |
doc_23538042
|
I could not find any info, except for size limitation (15/10 MB respectively).
Also, if it is possible, can I use a single code base?
Thanks!
A: Flutter APK size is too big to support Android Instant Apps. App Clips are experimental (more in docs).
| |
doc_23538043
|
awk '/word/ {print NR}' file.txt | head -n 1
The purpose is to find the line number of the line on which the word 'word' first appears in file.txt.
But when I put it in a script file, it doesn't seem to work.
#! /bin/sh
if [ $# -ne 2 ]
then
echo "Usage: $0 <word> <filename>"
exit 1
fi
awk '/$1/ {print NR}' $2 | head -n 1
So what did I do wrong?
Thanks,
A: Replace the single quotes with double quotes so that the $1 is evaluated by the shell:
awk "/$1/ {print NR}" $2 | head -n 1
A: In the shell, single-quotes prevent parameter-substitution; so if your script is invoked like this:
script.sh word
then you want to run this AWK program:
/word/ {print NR}
but you're actually running this one:
/$1/ {print NR}
and needless to say, AWK never sees your word there, because the shell does not substitute $1 inside single quotes.
To fix this, change your single-quotes to double-quotes:
awk "/$1/ {print NR}" $2 | head -n 1
so that the shell will substitute word for $1.
A: You should use AWK's variable passing feature:
awk -v patt="$1" '$0 ~ patt {print NR; exit}' "$2"
The exit makes the head -1 unnecessary.
A: You could also pass the value as a variable to awk:
awk -v varA=$1 '{if(match($0,varA)>0){print NR;}}' $2 | head -n 1
Seems more cumbersome than the above, but illustrates passing vars.
| |
doc_23538044
|
$ # Auto DevOps variables and functions # collapsed multi-line command
$ setup_test_db
$ cp -R . /tmp/app
$ /bin/herokuish buildpack test
-----> Java app detected
-----> Installing JDK 1.8... done
-----> Installing Maven 3.3.9... done
-----> Executing: mvn clean dependency:resolve-plugins test-compile
[INFO] Scanning for projects...
<SNIP>
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.953 s
[INFO] Finished at: 2018-04-01T17:22:11+00:00
[INFO] Final Memory: 23M/169M
[INFO] ------------------------------------------------------------------------
/tmp/buildpacks/05_buildpack-java/bin/test: line 24: mvn: command not found
A: There is a bug in the heroku java buildpack < v60 that messes with the Maven installation during test execution.
It has been fixed and bundled in herokuish 0.4.1.
If you re-run your Auto DevOps pipeline now, it will work.
source: https://gitlab.com/gitlab-org/gitlab-ce/issues/44980
| |
doc_23538045
|
6868 status_update_manager.cpp:177] Pausing sending status updates
6877 slave.cpp:915] New master detected at master@192.168.1.1:5050
6867 status_update_manager.cpp:177] Pausing sending status updates
6877 slave.cpp:936] No credentials provided. Attempting to register without authentication
6877 slave.cpp:947] Detecting new master
6869 slave.cpp:1217] Re-registered with master master@192.168.1.1:5050
6866 status_update_manager.cpp:184] Resuming sending status updates
6869 slave.cpp:1253] Forwarding total oversubscribed resources {}
6874 slave.cpp:4141] Master marked the agent as disconnected but the agent considers itself registered! Forcing re-registration.
6874 slave.cpp:904] Re-detecting master
6874 slave.cpp:947] Detecting new master
6874 status_update_manager.cpp:177] Pausing sending status updates
6869 status_update_manager.cpp:177] Pausing sending status updates
6871 slave.cpp:915] New master detected at master@192.168.1.1:5050
6871 slave.cpp:936] No credentials provided. Attempting to register without authentication
6871 slave.cpp:947] Detecting new master
6872 slave.cpp:1217] Re-registered with master master@192.168.1.1:5050
6872 slave.cpp:1253] Forwarding total oversubscribed resources {}
6871 status_update_manager.cpp:184] Resuming sending status updates
6871 slave.cpp:4141] Master marked the agent as disconnected but the agent considers itself registered! Forcing re-registration.
It seems to be stuck in an infinite loop. Any idea how to start a fresh slave? I've tried to remove work_dir and restart the mesos-slave process, but without any success.
The situation was caused by an accidental rename of work_dir. After restarting mesos-slave it wasn't able to reconnect or kill running tasks. I've tried to use cleanup on the slaves:
echo 'cleanup' > /etc/mesos-slave/recover
service mesos-slave restart
# after recovery finishes
rm /etc/mesos-slave/recover
service mesos-slave restart
This partially helped, but there are still many zombie tasks in Marathon, as the Mesos master is not able to retrieve any information about those tasks. When looking at the metrics I found that some slaves are marked as "inactive".
UPDATE: in master logs appears following:
Cannot kill task service_mesos-kafka_kafka.e0e3e128-ef0e-11e6-af93-fead7f32c37c
of framework ecd3a4be-d34c-46f3-b358-c4e26ac0d131-0000 (marathon) at
scheduler-e76665b1-de85-48a3-b9fd-5e736b64a9d8@192.168.1.10:52192
because the agent cac09818-0d75-46a9-acb1-4e17fdb9e328-S10 at
slave(1)@192.168.1.1:5051 (w10.example.net) is disconnected.
Kill will be retried if the agent re-registers
after restarting current mesos-master:
Cannot kill task service_mesos-kafka_kafka.e0e3e128-ef0e-11e6-af93-fead7f32c37c
of framework ecd3a4be-d34c-46f3-b358-c4e26ac0d131-0000 (marathon)
at scheduler-9e9753be-99ae-40a6-ab2f-ad7834126c33@192.168.1.10:39972
because it is unknown; performing reconciliation
Performing explicit task state reconciliation for 1 tasks
of framework ecd3a4be-d34c-46f3-b358-c4e26ac0d131-0000 (marathon)
at scheduler-9e9753be-99ae-40a6-ab2f-ad7834126c33@192.168.1.10:39972
Dropping reconciliation of task service_mesos-kafka_kafka.e0e3e128-ef0e-11e6-af93-fead7f32c37c
for framework ecd3a4be-d34c-46f3-b358-c4e26ac0d131-0000 (marathon)
at scheduler-9e9753be-99ae-40a6-ab2f-ad7834126c33@192.168.1.10:39972
because there are transitional agents
A: The split-brain situation was caused by having more than one work_dir. In most cases it might be enough to move data from the incorrect work_dir:
mv /tmp/mesos/slaves/* /var/lib/mesos/slaves/
Then force re-registration:
rm -rf /var/lib/mesos/meta/slaves/latest
service mesos-slave restart
Currently running tasks won't survive (won't be recovered). Tasks from old executors should be marked as TASK_LOST and scheduled for cleanup, which avoids the problem of zombie tasks that Mesos is unable to kill (because they were running in a different work_dir).
If the mesos-slave is still registered as inactive, restart the current Mesos master.
| |
doc_23538046
|
list_x = [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49]
list_y = [1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
When I plot them, the graph will look like this:
import matplotlib.pyplot as plt
plt.plot(list_x, list_y)
plt.show()
Based on these data points, is there a way to make a graph that looks like the one below and get its equation?
===========================================================
I have tried using the solution from here, and it produces a graph that is not smooth.
from scipy.interpolate import spline
import numpy as np
list_x_new = np.linspace(min(list_x), max(list_x), 1000)
list_y_smooth = spline(list_x, list_y, list_x_new)
plt.plot(list_x_new, list_y_smooth)
plt.show()
A: Here are 3 more curve smoothing options:
*
*Savitzky-Golay filter
*LOWESS smoother
*IIR filter
But first, recreate the original plot:
import matplotlib.pyplot as plt
list_x = [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49]
list_y = [1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
plt.plot(list_x, list_y)
plt.show()
*
*Savitzky-Golay filter from scipy
The Savitzky-Golay technique fits subsets (windows) of adjacent points to low order polynomials using least squares.
How to apply the Savitzky-Golay filter:
from scipy.signal import savgol_filter
window = 21
order = 2
y_sf = savgol_filter(list_y, window, order)
plt.plot(list_x, y_sf)
plt.show()
The window and order parameters mean this filter is quite adaptable.
Read more about using this filter in the scipy documentation.
*LOWESS smoother from statsmodels
LOWESS (locally weighted scatterplot smoothing) is a local regression method. In my experience it is simple to tune and often gives great results.
How to apply the LOWESS smoother:
import statsmodels.api as sm
y_lowess = sm.nonparametric.lowess(list_y, list_x, frac = 0.30) # 30 % lowess smoothing
plt.plot(y_lowess[:, 0], y_lowess[:, 1])
plt.show()
It may be possible to improve the approximation by varying the frac parameter, which is the fraction of the data used when estimating each y value. Increase the frac value to increase the amount of smoothing. The frac value must be between 0 and 1.
Further details on statsmodels lowess usage.
*IIR filter from scipy
After application of the lfilter:
from scipy.signal import lfilter
n = 15 # larger n gives smoother curves
b = [1.0 / n] * n # numerator coefficients
a = 1 # denominator coefficient
y_lf = lfilter(b, a, list_y)
plt.plot(list_x, y_lf)
plt.show()
Check scipy lfilter documentation for implementation details regarding how numerator and denominator coefficients are used in the difference equations.
There are other filters in the scipy.signal package.
Care must be taken to avoid over-smoothing with all these approaches.
Additionally, some of these methods may have unexpected edge effects.
A: Because your data is approximate (i.e., it has been quantized), you want an approximating spline, not an interpolating spline.
A: One easy option that echoes the suggestion from Davis Herring would be to use a polynomial approximation for the data
import numpy as np
import matplotlib.pyplot as plt
plt.figure()
poly = np.polyfit(list_x,list_y,5)
poly_y = np.poly1d(poly)(list_x)
plt.plot(list_x,poly_y)
plt.plot(list_x,list_y)
plt.show()
You will notice an oscillation at the right end of the plot that is not present in the original data, which is an artifact of the polynomial approximation.
Spline interpolation, as suggested above by Davis, is another good option. By varying the smoothness parameter s you can achieve a different balance between smoothness and distance to the original data.
from scipy.interpolate import splrep, splev
plt.figure()
bspl = splrep(list_x,list_y,s=5)
bspl_y = splev(list_x,bspl)
plt.plot(list_x,list_y)
plt.plot(list_x,bspl_y)
plt.show()
| |
doc_23538047
|
I want to remove it and reinstall it.
I have a problem finding the location of the nvm files.
logos1056@logos1056-Vostro-3578:~$ nvm ls
N/A
node -> stable (-> N/A) (default)
iojs -> N/A (default)
logos1056@logos1056-Vostro-3578:~$ rmdir $NVM_DIR
rmdir: failed to remove '/home/logos1056/.nvm': Directory not empty
logos1056@logos1056-Vostro-3578:~$ /home/logos1056/.nvm
bash: /home/logos1056/.nvm: Is a directory
logos1056@logos1056-Vostro-3578:~$ cd /home/logos1056/.nvm
logos1056@logos1056-Vostro-3578:~/.nvm$ rmdir $NVM_DIR
rmdir: failed to remove '/home/logos1056/.nvm': Directory not empty
A: nvm is usually installed in your home directory within a hidden folder called .nvm.
$NVM_DIR is the environment variable that contains the path to nvm.
You want to remove the directory and all its content. You can do it this way.
$ rm -rf $NVM_DIR
Then you can install nvm again.
A: Oneliner,
rm -rf $NVM_DIR ~/.npm ~/.bower
| |
doc_23538048
|
{
name: "mark"
subject: "maths"
phone: 123-456-7890
email_addresses: [ { email: "mark@example.com", is_primary: true } ]
}
My java class goes like this
public class Student {
@SerializedName("name") private String mName;
@SerializedName("subject") private String mSubject;
@SerializedName("phone") private String mPhone;
private String mEmail;
}
Is there a way to use @SerializedName for mEmail so that I can get the email field from the first object in the email_addresses array?
A: Create an inner static object and reference it that way (this works for Android; don't forget to make your object implement Parcelable).
@SerializedName("email_addresses")
private EmailAdresses mEmailAdresses;
public static class EmailAdresses {
@SerializedName("email")
private String mEmail;
@SerializedName("is_primary")
private boolean mIsPrimary;
}
A: No, there isn't. Either create your own TypeAdapter, or create a POJO type for email addresses and have Student declare a field of type List of whatever that POJO type is. Provide a getter to only retrieve the first email (if there is one).
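A rough sketch of that second approach (the nested class and getter names here are just illustrative, not part of the original answer):
import java.util.List;
import com.google.gson.annotations.SerializedName;

public class Student {
    @SerializedName("name") private String mName;
    @SerializedName("subject") private String mSubject;
    @SerializedName("phone") private String mPhone;
    @SerializedName("email_addresses") private List<EmailAddress> mEmailAddresses;

    // Convenience getter: returns the first email, or null if there is none.
    public String getEmail() {
        return (mEmailAddresses == null || mEmailAddresses.isEmpty())
                ? null
                : mEmailAddresses.get(0).getEmail();
    }

    public static class EmailAddress {
        @SerializedName("email") private String mEmail;
        @SerializedName("is_primary") private boolean mIsPrimary;

        public String getEmail() { return mEmail; }
        public boolean isPrimary() { return mIsPrimary; }
    }
}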
| |
doc_23538049
|
*
*I am trying to understand the complexity of the map method in JS.
*Can you tell me which one is the better one?
*In some of my React code I see people using the map method: http://jsfiddle.net/jhudson8/135oo6f8/.
*I am not sure when to use the map method and when to use a for loop.
*I am providing both code samples below.
*Using this example I calculated the complexity: https://github.com/andyttran/guide_to_algorithms#1c-analyze-line-by-line
- and found the for loop has 14 operations and the map method has 9 operations.
*In the comments below I have given a value of one for each operation.
*Can you tell me whether I have done it correctly?
for loop
var cars =[
{'toyota' : 'corolla', 'honda' : 'civic'},
{'toyota' : 'corolla1', 'honda' : 'civic1'},
{'toyota' : 'corolla2', 'honda' : 'civic2'},
{'toyota' : 'corolla3', 'honda' : 'civic3'},
];
var names = []; //names = 1, [] = 1
for(var i=0; i< cars.length; i++) { // for = 1 , var i = 1, =0 = 1, < = 1, cars.length = 1, i++ = 1
names.push(cars[i].toyota); //names = 1, push = 1, cars[i] = 1, toyota = 1
}
console.log(names); // has an array access and then printing to the console, so that's 2 operations
total = 14
map method js
var cars =[
{'toyota' : 'corolla', 'honda' : 'civic'},
{'toyota' : 'corolla1', 'honda' : 'civic1'},
{'toyota' : 'corolla2', 'honda' : 'civic2'},
{'toyota' : 'corolla3', 'honda' : 'civic3'},
];
var mapValues = cars.map(function(animal){ //mapvalues = 1, cars = 1, map= 1, function= 1
return animal.toyota; //return = 1, animal = 1, toyota = 1
});
console.log(mapValues); // has an array access and then printing to the console, so that's 2 operations
total = 9
A: The way you count "operations" is really arbitrary: some of them will "cost" a lot more than others.
For instance, you count i++ as one operation, but someone might say it consists of 5 operations:
*
*Read the value of i
*Keep that value in memory
*Calculate i+1
*Update i with that calculated value
*Return the memorised value
... and then there still is no indication of which of these operations is more costly than the others. This way of comparing different algorithms will not bring you much.
What is important with calculating time and space complexities is the order of magnitude. Imagine cars.length is not 4, but one million. Then it really is not significant whether the operations outside the loop are 4, 6, 9, 11, ... What is important is that these operations are the same in number whether your input array is small or large. So they represent a constant number of operations, i.e. they have O(1) time complexity.
The same reasoning goes for the loop. It is about order of magnitude. If the body of the loop has 4 or 5 operations, that just means the total number of operations for the complete loop is 4n or 5n. In both cases the order of magnitude is n (as opposed to n², or nlogn, ...). That is what is important when speaking of time and space complexities.
So, in conclusion, both the old for loop and array methods like map, forEach, reduce, ... have a time complexity of O(n).
| |
doc_23538050
|
public class MethodClassProject
{
Integer Id ;
String Brand ;
String Price ;
String Size ;
String Quantity ;
String Code ;
String Color ;
String Style ;
void setId (int userId)
{
Id = userId ;
}
void setBrand (String userBrand)
{
Brand = userBrand ;
}
A: Read up on the return keyword. An example of getter and setter methods can be found at How do getters and setters work?. A setter method is a method which sets a variable in your class. A getter method gives the variable to whatever calls the method.
Getters:
In a main method:
String s = item.getBrand();
Would set String s to whatever the brand name is (item being an instance of MethodClassProject).
Setters:
item.setBrand("Diet Coke");
Would set the brand name to Diet Coke.
Together:
class MainWrapper {
    public static void main(String[] args) {
        MethodClassProject item = new MethodClassProject();
        item.setBrand("Diet Coke");
        System.out.println(item.getBrand()); // Prints Diet Coke
    }
}
I would also recommend simply making your variables public if you just want the variable to be set directly, without anything changing the input value.
I will leave the actual getter methods in the class to you, otherwise you will not learn.
A: You were definitely on the right track, but here's a few suggestions:
*
*Variable names in java should start with a lowercase letter
*Keep an eye on your indentation. While it's not really important to the compiler, it makes it very difficult for humans to read when the indentation isn't right.
*Watch your braces, there's a closing brace missing in the code you provided.
*Look up how to return items
*Look up the this keyword in Java, as well (see the generic sketch below)
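As a generic illustration of the last two points, here is a minimal sketch (a made-up Example class, not your MethodClassProject, so you still get to write your own getters):
public class Example {
    private String brand; // lowercase field name, per the first point

    public void setBrand(String brand) {
        this.brand = brand; // 'this' distinguishes the field from the parameter
    }

    public String getBrand() {
        return brand; // 'return' hands the value back to the caller
    }
}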
| |
doc_23538051
|
Don't you have to convert it somehow to make it compatible?
As far as I understand, the metadata of the two file systems are different, but what happens to that different metadata?
A: A file system is actually an abstract user interface to access the data behind it. It works in the same way that you can access data from a DB through a web-page.
You access this interface with file utilities which create, list, copy, move and delete files, and alter metadata. You'll then need some NTFS utils, ext3 utils and so on (it's not a given that they will be present).
There are several aspects that the program doing the transfer (for example, nautilus) has to deal with:
-how to deal with long names and non-standard characters like blank spaces and non-ASCII (normally copying fails here, so it's better to avoid this)
-endianness (the order of storing bytes). Reading 0A0B0C0D from left to right is not the same as reading it from right to left. Both methods are in use, but the problem is old and tools can normally deal with it.
-things like Linux permissions get compromised when copying files between file systems (when transferring the file, not just accessing it through a file server like Samba). The recipient can change them to whatever he wants, being root and all. File systems like FAT don't support security at all, so as soon as you copy the file to it the security information is simply lost. Linux OSs can apply a standard set of permissions (for example, with umask, not letting any file be executable).
A: How a file gets copied:
*
*Open old file to read.
*Open new file to write.
*Read/write bytes between files.
The file systems involved don't matter.
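For illustration (this is not part of the original answer), here is roughly what those three steps look like in Java; the file names are just placeholders:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyFile {
    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream("source.dat");           // 1. open old file to read
             OutputStream out = new FileOutputStream("destination.dat")) { // 2. open new file to write
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {                          // 3. read/write bytes between files
                out.write(buffer, 0, n);
            }
        }
    }
}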
| |
doc_23538052
|
This is going to be updated on a public GitHub repo.
A simplified version of what I am trying to do:
if (current_market_status > 0): #greater than 0
current_cash_required_equity = 0.3
elif (current_market_status > -0.05): #less than 5%
current_cash_required_equity = 0.25
elif (current_market_status > -0.10): #less than 10%
current_cash_required_equity = 0.20
elif (current_market_status > -0.15): #less than 15%
current_cash_required_equity = 0.15
elif (current_market_status > -0.20): #less than 20%
current_cash_required_equity = 0.10
elif (current_market_status > -0.25): #less than 25%
current_cash_required_equity = 0.05
elif (current_market_status > -0.30): #less than 30%
current_cash_required_equity = 0
This is how the algorithm works.
*
*Starts with a 10,000 portfolio with 100% cash position on Jan 1, 2000. It can only choose to buy SPY.
*Every single month, 1000 is deposited into portfolio.
*For every single day, it keeps track of current market status
*
*Essentially the market status can be anything from 0% to -50%. 0% means it's at an all-time high, and any number between 0 and -50% is how much it has dropped from the most recent all-time high.
*For each day, it determines what cash position the portfolio should have based on the current market status. This could be any number between 50% and 0%.
*Repeat this each day until Dec 31, 2019
So there are 50 options, and for each option there are 50 options.
The theory is that if the market drops, we should keep less cash, since it is discounted. So if the market is going up each day, then we keep 30% on hand or something. If it drops 10%, we might want to keep 20% on hand. If it drops 50%, we might want to keep no cash on hand and shovel 100% of our deposits into the market.
The best strategy in the end is the one with the highest CAGR, or Compound Annual Growth Rate.
I am just learning to code and am very new to this, so it is very likely I totally messed this up and am doing the whole loop wrong for what I want. Any help with this is much appreciated.
I sort of set out to do this by using this loop to set the 14 metrics first, prior to running the whole script for the 20 years, so that each iteration of the 20 years has its own set of 7 possible market statuses and how much cash should be held for each.
status = np.arange(0, -0.50, -0.01)
req = np.arange(0.5, -0.01, -1)
# for every fixed market status, go trough every single cash requirment.
# for every single market status, try every single cash req
for first in range(len(req)):
first_req = round(req[first], 2) # this is the 1st element, rounded to 2 digit
for second in range(first, len(req)):
second_req = round(req[second], 2) # same with 2nd element
for third in range(second, len(req)):
third_req = round(req[third], 2)
for fourth in range(third, len(req)):
fourth_req = round(req[fourth], 2)
for fifth in range(fourth, len(req)):
fifth_req = round(req[fifth], 2)
for sixth in range(fifth, len(req)):
sixth_req = round(req[fifth], 2)
for seventh in range(sixth, len(req)):
seventh_req = round(req[sixth], 2)
for first in range(len(status)):
first_status = round(status[first], 2)
for second in range(first, len(status)):
second_status = round(status[second], 2)
for third in range(second, len(status)):
third_status = round(status[third], 2)
for fourth in range(third, len(status)):
fourth_status = round(status[fourth], 2)
for fifth in range(fourth, len(status)):
fifth_status = round(status[fifth], 2)
for sixth in range(fifth, len(status)):
sixth_status = round(status[fifth], 2)
for seventh in range(sixth, len(status)):
seventh_status = round(status[sixth], 2)
print('')
print(first_req, second_req, third_req, fourth_req, fifth_req, sixth_req, seventh_req,) # you can replace this with your function
print(first_status, second_status, third_status, fourth_status, fifth_status, sixth_status, seventh_status,)
| |
doc_23538053
|
When I copy the worksheet Right-Click > Move or Copy in the same workbook I get sheet Sheet1 (2).
The Table on this sheet is automatically named Table13.
I do some processing in that copied sheet and subsequently remove it. Leaving the workbook with only its original Sheet1.
Each time I make a copy of Sheet1 the table in the copied sheet is incremented by one.
Also if I remove the sheet and add a new one. It keeps incrementing.
I use the workbook and Sheet1 as a template and I create via a macro a lot of copies.
The new Table Name has now Incremented to Table21600.
I found out that Excel will give an overflow when I reach approximately Table21650.
So, I need a way to reset the Name counter of the added table.
Does anyone know how to achieve this?
A: You can access (and alter) the names of each table ("ListObject") from your macro-code as shown in this example:
Sub ListAllListObjectNames()
Dim wrksheet As Worksheet
Dim lstObjct As ListObject
Dim count As Integer
count = 0
For Each wrksheet In ActiveWorkbook.Worksheets
For Each lstObjct In wrksheet.ListObjects
count = count + 1
lstObjct.Name = "Table_" & CStr(count)
Debug.Print wrksheet.Name, ": ", lstObjct.Name
Next
Next
End Sub
A: Reset Table 'Counter'
*
*Although the 'counter' will not stop incrementing, when you close
the workbook and open it the next time, it will again start from
Table13.
*In the Immediate window (CTRL+G) you will see the table name
before and after the renaming. When done testing, just comment out the
lines containing Debug.Print.
The First Code
' Copies a sheet and renames all its tables.
Sub CopySheetWithTable(Optional SheetNameOrIndex As Variant = "Sheet1", _
Optional NewTableName As String = "Tbl")
Dim MySheet As Worksheet
Dim MyCopy As Worksheet
Dim MyTable As ListObject
Dim i As Long
Set MySheet = ThisWorkbook.Worksheets(SheetNameOrIndex)
'MySheet.Copy MySheet ' Before e.g. Sheet1)
MySheet.Copy , MySheet ' After e.g. Sheet1
Set MyCopy = ActiveSheet
For Each MyTable In MyCopy.ListObjects
i = i + 1
Debug.Print "Old Table Name = " & MyTable.Name
MyTable.Name = NewTableName & i
Debug.Print "Old Table Name = " & MyTable.Name
Next
End Sub
Usage
*
*Copy the previous and the following sub into a module. Run the
following sub to copy a new worksheet. Adjust if you want it before
or after the sheet to be copied.
*You don't need to copy the worksheet manually anymore.
The Second Code
' You can create a button on the worksheet and use this one-liner in its code.
Sub CopySheet()
CopySheetWithTable ' Default is CopySheetWithTable "Sheet1", "Tbl"
End Sub
Delete all Sheets After Worksheet
This is just a testing tool.
' Deletes all sheets after the selected sheet (referring to the tab order).
Sub DeleteSheetsAfter(DeleteAfterSheetNameOrIndex As Variant) 'Not tested.
Dim LastSheetNumber As Long
Dim SheetsArray() As Variant
Dim i As Long
' Try to find the worksheet in the workbook containing this code.
On Error Resume Next
LastSheetNumber = _
ThisWorkbook.Worksheets(DeleteAfterSheetNameOrIndex).Index
If Err.Number <> 0 Then
MsgBox "There is no Sheet '" & DeleteAfterSheetNameOrIndex & "' " _
& "in (this) workbook '" & ThisWorkbook.Name & "'."
Exit Sub
End If
With ThisWorkbook
ReDim SheetsArray(.Sheets.Count - LastSheetNumber - 1)
For i = LastSheetNumber + 1 To .Sheets.Count
SheetsArray(i - LastSheetNumber - 1) = i
Next
End With
Application.DisplayAlerts = False
ThisWorkbook.Sheets(SheetsArray).Delete
Application.DisplayAlerts = True
MsgBox "Deleted " & UBound(SheetsArray) & " worksheets after worksheet '" _
& ThisWorkbook.Worksheets(DeleteAfterSheetNameOrIndex).Name & "'.", _
vbInformation, "Delete Successful"
End Sub
Sub DeleteAfter()
DeleteSheetsAfter "Sheet1"
End Sub
| |
doc_23538054
|
<xsl:if test="not(document('some_external_doc.xml')//myxpath)">
<xsl:message terminate="yes">ERROR: Missing element!</xsl:message>
<h1>Error detected!</h1>
</xsl:if>
The missing document/xpath is detected by the <xsl:if> and the <h1> will be displayed, but for some reason the terminate attribute of the <xsl:message> is ignored. The transformation is done in Railo, so the XSLT processor used should be the Java default, but I wasn't able to find anything definitive about the processor Railo is using.
A: You have the right idea, however...
If your XSLT processor implements XSLT 1.0, it technically does not have to terminate. Note the use of the word should rather than must in the spec for xsl:message:
If the terminate attribute has the value yes, then the XSLT processor
should terminate processing after sending the message. The default
value is no.
Interestingly, XSLT 2.0 changes the should to a must:
If the effective value of the terminate attribute is yes, then the
processor must terminate processing after sending the message.
Note also that order of execution of xsl:message statements is processor-dependent; keep this in mind when looking for xsl:message output in the logs.
Finally, you have some additional exception processing options under XSLT 2.0 (error() function) and XSLT 3.0 (xsl:try and xsl:catch).
| |
doc_23538055
|
Here is our Model:
public class Request
{
...
public int UserId { get; set; }
[NotMapped]
public string UserName {get; set; }
}
Because we're using OData, rather than being serialized through the default JsonMediaTypeFormatter, it goes through the OdataMediaTypeFormatter which completely ignores anything with the [NotMapped] attribute.
We could work around this problem by manually adding the properties to the modelBuilder. However this becomes an issue when trying to integrate with Breeze, because they have their own custom EdmBuilder that must be used for things like navigable properties to be preserved, and we cannot use the standard ODataConventionModelBuilder. This custom builder doesn't seem to allow for any level of control over the models. Is it at all possible to force OData to properly serialize these properties and also keep metadata that's compliant with Breeze? Has anyone tried something similar before?
Side note:
We're trying to avoid storing or just making dummy columns in the db for this data, seeing as we need 5 of these properties, but this may wind up being our course of action if we dump too much more time into this.
Thanks in Advance
A: In terms of serialization, what is hurting you is the intermediate EdmBuilder that is supplied by breeze. See: https://github.com/Breeze/breeze.server.labs/blob/master/EdmBuilder.cs
Because of the limitations defined in the comments of the EdmBuilder.cs
We need the EDM both to define the Web API OData route and as a source of metadata for the Breeze client. The Web API OData literature recommends the System.Web.Http.OData.Builder.ODataConventionModelBuilder.
That component is sufficient for route definition but fails as a source of metadata for Breeze because (as of this writing) it neglects to include the foreign key definitions Breeze requires to maintain navigation properties of client-side JavaScript entities.
This EDM Builder asks the EF DbContext to supply the metadata which satisfies both route definition and Breeze.
You're only getting the metadata the EntityFramework chooses to expose. This prevents the OData formatters/serializers from including the property - it's not mapped in the model metadata.
You could attempt a solution with a custom serializer, similar to what is represented in this article. Using OData in webapi for properties known only at runtime
A custom serializer would look roughly like this (Note: this DOES NOT work.. continue reading, below...)
public class CustomEntitySerializer : ODataEntityTypeSerializer
{
public CustomEntitySerializer(ODataSerializerProvider serializerProvider) : base(serializerProvider) { }
public override ODataEntry CreateEntry(SelectExpandNode selectExpandNode, EntityInstanceContext entityInstanceContext)
{
ODataEntry entry = base.CreateEntry(selectExpandNode, entityInstanceContext);
Request item = entityInstanceContext.EntityInstance as Request;
if (entry != null && item != null)
{
// add your "NotMapped" property here.
entry.Properties = new List<ODataProperty>(entry.Properties) { new ODataProperty { Name = "UserName", Value = item.UserName} };
}
return entry;
}
}
The trouble with this is that the underlying ODataJsonLightPropertySerializer checks the model for the existence of the property as it's attempting to write. It calls the ValidatePropertyDefined method in the Microsoft.Data.OData.WriterValidationUtils class.
internal static IEdmProperty ValidatePropertyDefined(string propertyName, IEdmStructuredType owningStructuredType)
This will fail you with the runtime exception:
The property 'UserName' does not exist on type 'YourNamespace.Models.Request'
. Make sure to only use property names that are defined by the type.","type":"Microsoft.Data.OData.ODataException"
,"stacktrace":" at Microsoft.Data.OData.WriterValidationUtils.ValidatePropertyDefined(String propertyName
, IEdmStructuredType owningStructuredType)\r\n at Microsoft.Data.OData.JsonLight.ODataJsonLightPropertySerializer
.WriteProperty(ODataProperty property, IEdmStructuredType owningType, Boolean isTopLevel, Boolean allowStreamProperty
, DuplicatePropertyNamesChecker duplicatePropertyNamesChecker, ProjectedPropertiesAnnotation projectedProperties
Bottom line is that the property needs to be defined in the model in order to serialize it. You could conceivably rewrite large portions of the serialization layer, but there are lots of internal/static/private/non-virtual bits in the OData framework that make that unpleasant.
A solution is ultimately presented in the way Breeze is forcing you to generate the model, though. Assuming a code-first implementation, you can inject additional model metadata directly into the XmlDocument produced by EntityFramework. Take the method in the Breeze EdmBuilder, with some slight modifications:
static IEdmModel GetCodeFirstEdm<T>(this T dbContext) where T : DbContext
{
// create the XmlDoc from the EF metadata
XmlDocument metadataDocument = new XmlDocument();
using (var stream = new MemoryStream())
using (var writer = XmlWriter.Create(stream))
{
System.Data.Entity.Infrastructure.EdmxWriter.WriteEdmx(dbContext, writer);
stream.Position = 0;
metadataDocument.Load(stream);
}
// to support proper xpath queries
var nsm = new XmlNamespaceManager(metadataDocument.NameTable);
nsm.AddNamespace("ssdl", "http://schemas.microsoft.com/ado/2009/02/edm/ssdl");
nsm.AddNamespace("edmx", "http://schemas.microsoft.com/ado/2009/11/edmx");
nsm.AddNamespace("edm", "http://schemas.microsoft.com/ado/2009/11/edm");
// find the node we want to work with & add the 1..N property metadata
var typeElement = metadataDocument.SelectSingleNode("//edmx:Edmx/edmx:Runtime/edmx:ConceptualModels/edm:Schema/edm:EntityType[@Name=\"Request\"]", nsm);
// effectively, we want to insert this.
// <Property Name="UserName" Type="String" MaxLength="1000" FixedLength="false" Unicode="true" Nullable="true" />
var propElement = metadataDocument.CreateElement(null, "Property", "http://schemas.microsoft.com/ado/2009/11/edm");
propElement.SetAttribute("Name", "UserName");
propElement.SetAttribute("Type", "String");
propElement.SetAttribute("FixedLength", "false");
propElement.SetAttribute("Unicode", "true");
propElement.SetAttribute("Nullable", "true");
// append the node to the type element
typeElement.AppendChild(propElement);
// now we're going to save the updated xml doc and parse it.
using (var stream = new MemoryStream())
{
metadataDocument.Save(stream);
stream.Position = 0;
using (var reader = XmlReader.Create(stream))
{
return EdmxReader.Parse(reader);
}
}
}
This will place the property into the metadata to be consumed by the OData layer and make any additional steps to promote serialization unnecessary. You will, however, need to be mindful of how you shape the model metadata, as any requirements/specs will be reflected in the client side validation in Breeze.
I have validated the CRUD operations of this approach in the ODataBreezejs sample provided by Breeze. https://github.com/Breeze/breeze.js.samples/tree/master/net/ODataBreezejsSample
| |
doc_23538056
|
BEGIN
DECLARE ...
CREATE TEMPORARY TABLE tmptbl_found (...);
PREPARE find FROM " INSERT INTO tmptbl_found
(SELECT userid FROM
(
SELECT userid FROM Soul
WHERE
.?.?.
ORDER BY
.?.?.
) AS left_tbl
LEFT JOIN
Contact
ON userid = Contact.userid
WHERE Contact.userid IS NULL LIMIT ?)
";
DECLARE iter CURSOR FOR SELECT userid, ... FROM Soul ...;
...
l:LOOP
FETCH iter INTO u_id, ...;
...
EXECUTE find USING ...,. . .,u_id,...;
...
END LOOP;
...
END//
and it gives multi-results. Besides being inconvenient, if I get all these result sets (which I really don't need at all), about 5 (the LIMIT parameter) for each of the hundreds of thousands of records in Soul, I'm afraid it will take all my memory (and all in vain).
Also, I noticed that if I prepare from an empty string, it still has multi-results...
At the very least, how do I get rid of them in the EXECUTE statement?
And I would like a recipe to avoid ANY output from the SP, for any possible statement
(I also have a lot of "update ..."s and "select ... into "s inside, if they can produce multis).
Thanks for any help...
A: Well, I'll just say that it turned out there wasn't really a problem. I didn't investigate deeply, but it looks like the server didn't actually try to execute the statement ("call Proc();") to see whether there would be any results to return; it just looked at the code and assumed that there would be multiple result sets, requiring a connection capable of handling them. But in phpMyAdmin, which I was using at the time, the connection wasn't. However, issuing the same command from the MySQL command line client did the trick: no complaints about the given connection context, and no multis either, because they don't have to be there; it's just MySQL's estimate. I didn't have to conclude from the error that an SP like this one will certainly return multis in MySQL, flushing all the intermediately fetched data, which I would need to suppress somehow.
It may not be exactly as I supposed, but the problem is gone now.
| |
doc_23538057
|
Interface 'ItemI<T, P>' incorrectly extends interface 'P'.
'ItemI<T, P>' is assignable to the constraint of type 'P', but 'P' could be instantiated with a different subtype of constraint '_ItemFundamentI<T>'.
export interface ItemBuildI<T> {
readonly name : string,
readonly props : T,
}
export interface _ItemFundamentI<T> extends ItemBuildI<T> {
readonly __typename__ : typeof ITEM_TYPENAME,
readonly __itemTypename__ : string,
readonly key : string,
readonly createdAt : Date,
readonly modifiedAt : Date,
readonly context : string | undefined,
}
export interface ItemI<T, P extends _ItemFundamentI<T>> extends P, _ItemFundamentI<T> {
}
While I understand what this message is saying, I'm confused as to why P extends _ItemFundamentI<T> is not recognized as an object which extends _ItemFundamentI<T>. Is it because I could potentially overwrite the values in an extension?
If so, how best can I coerce this type recognition? Is there a way I can protect certain object keys from future overwrites?
| |
doc_23538058
|
Ok here's my problem.
I have a page which refreshes every few seconds to fetch new data from the database via AJAX. Ideally the structure of the returned value should be as such:
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
After receiving this table row structure result, I need to append that to the current table in the page which essentially just adds a new record on the table if there are new records. If there's none, then no changes to the table are made.
Currently I am using
var req = new Request.HTML({url: url_to_get_new_rows,
onSuccess: function(html, responseHTML) {
// append table row 'html' here
}
}).send();
However, the returned value in the 'html' variable that I'm supposed to append at the end of the table only returns
1 2 3 4
This obviously is an undesired behavior as I need the tr and td elements to make it work.
I hope someone could help me with this problem.
THANKS!
A: Javascript:
new Request.HTML({
url:'tr.php',
onSuccess: function(responseTree, responseElements, responseHTML, responseJavaScript) {
var tbody = document.id('tbody');
tbody.set('html', tbody.get('html') + responseHTML);
// or
var tr = new Element('table', {'html': responseHTML}).getElement('tr');
tr.inject(tbody);
}
}).get();
HTML:
<table>
<thead>
<tr>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
</tr>
</thead>
<tbody id="tbody">
<tr>
<td>a</td>
<td>b</td>
<td>c</td>
<td>d</td>
</tr>
</tbody>
</table>
| |
doc_23538059
|
Now I can run the whole site without error. But if I apply for leave, it shows an error.
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)
[Microsoft][ODBC Microsoft Access Driver] Operation must use an
updateable query. /eleave/leaveApplicationOut.asp, line 39
Update
After granting the Write permission, the following error is showing:
Error Type:
jmail.Message (0x8000FFFF) The message was undeliverable. All servers
failed to receive the message /eleave/leaveApplicationOut.asp, line 80
Thank you very much for your support.
It is solved..
A: 4 possible causes are highlighted here: http://support.microsoft.com/kb/175168
I am guessing it's #1:
The most common reason is that the Internet Guest account (IUSR_MACHINE), which is by default part of the "Everyone" group, does not have Write permissions on the database file (.mdb). To fix this problem, use the Security tab in Explorer to adjust the properties for this file so that the Internet Guest account has the correct permissions.
A: First error (which seems like you solved) has to do with write permissions on the database..
The updated question ,though, seem to be completely unrelated..
You seem to be trying to send an email, right ? and it says it failed..
Perhaps the SMTP service is not running and so it cannot send the email ? could it be a wrong IP address defined somewhere ? wrong credentials for the email accounts ? (read http://host.cdesystems.com/faq/jmail_faq.asp for possible problem)
Please share some code showing how you configure jmail.
| |
doc_23538060
|
SELECT DISTINCT c.Section
FROM c
WHERE c.brand = 'monki'
AND c.Consumer = 'Storelens_V2'
So I changed it to this
SELECT DISTINCT VALUE c.Section
FROM c
WHERE c.brand = 'monki'
AND c.Consumer = 'Storelens_V2'
but this gives the error
Failed to query item for container formatteddata:
Cannot set property 'headers' of undefined
How can I use distinct and Value at the same time?
A: SELECT DISTINCT VALUE(c.Section)
FROM c
WHERE c.brand = 'monki'
AND c.Consumer = 'Storelens_V2'
A: As weird as this is, filtering out null values makes the query work. I really don't understand why, but simply adding a WHERE clause with distinct_property != null solves the problem directly.
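For clarity, here is a sketch of that workaround applied to the query above (assuming the property being selected, c.Section, is the one that can be null; IS_DEFINED(c.Section) would be the variant to use if the property can be missing entirely):
SELECT DISTINCT VALUE c.Section
FROM c
WHERE c.brand = 'monki'
AND c.Consumer = 'Storelens_V2'
AND c.Section != null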
| |
doc_23538061
|
It happens time to time in my production environment and I can't figure out the reason.
| |
doc_23538062
|
For example.
else if(strcmp(argv[1],"wait") == 0 )
Works perfectly when I type 'wait 2', it executes the code located in that if-statement, BUT if I try to type just 'wait' (only one argument), it doesn't recognize it and doesn't go to this function.
Why is it not working, despite the fact that argv[0] DOES match 'wait'? Thank you!
A: argv[0] is the name of the executable.
You probably want argv[1] there (after checking argc).
A: argv[0] is your program name. You must type argv[1] instead.
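A minimal sketch of what both answers describe, checking argc before touching argv[1] (the command names are just illustrations):
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    /* argv[0] is the program name; the first user-typed word is argv[1]. */
    if (argc > 1 && strcmp(argv[1], "wait") == 0) {
        if (argc > 2) {
            printf("wait with argument: %s\n", argv[2]);
        } else {
            printf("wait with no argument\n");
        }
    }
    return 0;
}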
| |
doc_23538063
| ||
doc_23538064
|
I load the data into R via the read.csv function, which stores the data in a data.frame object with 2 columns. I perform some manipulations to transform the object into a zoo object with the index set to the date. So now the object has one column, which is supposed to be numeric data, and the date index.
The problem is the data has the character string "ND" randomly scattered about. I want to extract only those rows of the zoo object that do not contain "ND".
yr2 is the zoo object of concern.
Example:
03/15/2011 0.63
03/16/2011 0.58
03/17/2011 0.60
03/18/2011 0.61
03/21/2011 0.67
03/22/2011 ND
03/23/2011 0.69
03/24/2011 0.72
03/25/2011 0.79
03/28/2011 0.81
03/29/2011 0.81
03/30/2011 0.80
03/31/2011 0.80
I have tried the following:
> yr2[!="ND"]
Error: unexpected '!=' in "yr2[!="
> yr2[yr2[!="ND"]]
Error: unexpected '!=' in "yr2[yr2[!="
>
> yr2[!is.character(yr2)]
Data:
character(0)
Index:
Data:
named character(0)
Index:
integer(0)
I would greatly appreciate some guidance. Thank you.
A: Does it make sense to address the offending "ND" data before converting it into a zoo object? Does ND stand for "no data", i.e. should be interpreted as NA?
txt <- "03/15/2011 0.63
03/16/2011 0.58
03/17/2011 0.60
03/18/2011 0.61
03/21/2011 0.67
03/22/2011 ND
03/23/2011 0.69
03/24/2011 0.72
03/25/2011 0.79
03/28/2011 0.81
03/29/2011 0.81
03/30/2011 0.80
03/31/2011 0.80"
#If ND == NA
dat <- read.table(textConnection(txt), header = FALSE, na.strings = "ND")
#if not
dat <- read.table(textConnection(txt), header = FALSE)
dat[dat$V2 != "ND" ,]
#or
subset(dat, V2 != "ND")
A: Try this:
Lines <- "03/15/2011 0.63
03/16/2011 0.58
03/17/2011 0.60
03/18/2011 0.61
03/21/2011 0.67
03/22/2011 ND
03/23/2011 0.69
03/24/2011 0.72
03/25/2011 0.79
03/28/2011 0.81
03/29/2011 0.81
03/30/2011 0.80
03/31/2011 0.80"
library(zoo)
z <- read.zoo(textConnection(Lines), format = "%m/%d/%Y", na.strings = "ND")
zz <- na.omit(z)
plot(zz)
| |
doc_23538065
|
class _CountDownTimerState extends State<CountDownTimer>
with TickerProviderStateMixin {
AnimationController controller;
String get timerString {
Duration duration = controller.duration * controller.value;
return '${duration.inMinutes}:${(duration.inSeconds % 60).toString().padLeft(2, '0')}';
}
@override
void initState() {
super.initState();
controller = AnimationController(
vsync: this,
duration: Duration(seconds: 60),
);
}
and this is how I wrote the code for the +30s button, but it only resets the time to 00:30 seconds.
child: RaisedButton(
onPressed: () {
if (controller.isAnimating)
controller.duration = Duration(seconds: 30);
},
color: Colors.white,
shape: RoundedRectangleBorder(
borderRadius: new BorderRadius.circular(30),
),
textColor: Colors.black,
child: Text("Give player +30s ")),
),
A: Here is a Solution using an AnimationController combined with a Timer. I used an initial time of 10 seconds and a time increase of 5 seconds for a faster demo.
import 'dart:async';
import 'package:flutter/material.dart';
void main() {
runApp(
MaterialApp(
debugShowCheckedModeBanner: false,
title: 'Timer Demo',
home: HomePage(),
),
);
}
class HomePage extends StatefulWidget {
@override
_HomePageState createState() => _HomePageState();
}
class _HomePageState extends State<HomePage> {
bool done = false;
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: done
? Text('TIME OUT')
: CountDownTimer(
onCompleted: () => setState(() => done = true),
),
),
);
}
}
class CountDownTimer extends StatefulWidget {
final VoidCallback onCompleted;
const CountDownTimer({
Key key,
this.onCompleted,
}) : super(key: key);
@override
_CountDownTimerState createState() => _CountDownTimerState();
}
class _CountDownTimerState extends State<CountDownTimer>
with TickerProviderStateMixin {
AnimationController _controller;
Timer _timer;
int _elapsed;
@override
void initState() {
super.initState();
// AnimationController
_controller = AnimationController(vsync: this, duration: kInitialTime);
_controller.addListener(() => setState(() {}));
_controller.addStatusListener(
(AnimationStatus status) {
if (status == AnimationStatus.completed) {
_timer.cancel();
widget.onCompleted();
}
},
);
// Elapsed Counter
_elapsed = 0;
_timer = Timer.periodic(
Duration(seconds: 1),
(_) => setState(() => _elapsed++),
);
// Launch the Controller
_controller.forward();
}
void increaseTime([int extraTime = kExtraTime]) {
_controller.duration =
Duration(seconds: _controller.duration.inSeconds + extraTime);
_controller.reset();
_controller.forward(from: _elapsed / _controller.duration.inSeconds);
}
@override
void dispose() {
_controller.dispose();
_timer.cancel();
super.dispose();
}
@override
Widget build(BuildContext context) {
return Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
CircularProgressIndicator(
value: _controller.value,
backgroundColor: Colors.black12,
),
const SizedBox(height: 12.0),
Text('$_elapsed / ${_controller.duration.inSeconds}'),
const SizedBox(height: 24.0),
ElevatedButton(
onPressed: () => increaseTime(),
child: Text('MORE TIME'),
),
],
);
}
}
// CONFIGURATION
const Duration kInitialTime = Duration(seconds: 10);
const int kExtraTime = 5;
A: You are setting the duration to 30 seconds, what you need to do instead is add to the remaining duration. You can probably use the + operator of the Duration class: https://api.dart.dev/stable/2.10.5/dart-core/Duration/operator_plus.html
Something like this:
onPressed: () {
if (controller.isAnimating) {
controller.duration += Duration(seconds: 30);
}
},
Not sure if the controller actually supports this.
| |
doc_23538066
|
So far I have
$maybe muid <- maybeAuthId
<a href=@{AuthR LogoutR} >Logout
$nothing
<a href=@{AuthR LoginR} >Login
but I get an error:
Couldn't match expected type `Maybe v0'
with actual type `GHandler s0 m0 (Maybe (AuthId m0))'
In the first argument of `Text.Hamlet.maybeH', namely `maybeAuthId'
A: maybeAuthId is a monadic action that performs database and session-related operations. You can't have monadic actions in the definition of a Hamlet template. Imagine what would happen if you wrote this (a similar monadic action):
$maybe a <- liftIO (putStrLn "Hello World") >> return (Just "Hi")
<p>Just #{a}
$nothing
<p>Nothing
How often should that action be executed; every time the template is rendered? When it's loaded? It might get very messy if it did something other than just printing "Hello World" to the terminal, and even then it's not very safe -- would you expect your template files to be able to print to the terminal, launch nukes or steal your credit card information?
That's why only pure values are allowed in all Shakespearean templates. You need to do this instead:
getMyHandlerR :: Handler RepHtml
getMyHandlerR = do
muid <- maybeAuthId
$(widgetFile "foo")
(foo.hamlet:)
$maybe uid <- muid
<p>Foo
$nothing
<p>Bar
As you can see, the maybeAuthId function will be executed outside of the template, and the result is matched within the template. That way, you can make sure that your session/database is checked at a specific point in time that you can determine, and that your template doesn't inject a virus because your designer didn't get paid enough and acted out his revenge on you.
By the way, you might want to use a Bool to indicate whether the user is logged in and use an $if statement instead. You might want to use the isJust function from the Data.Maybe module for that.
| |
doc_23538067
|
<?php
$video_array = array
('http://www.youtube.com/embed/rMNNDINCFHg',
'http://www.youtube.com/embed/bDF6DVzKFFg',
'http://www.youtube.com/embed/bDF6DVzKFFg');
$total = count($video_array);
$random = (mt_rand()%$total);
$video = "$video_array[$random]";
?>
that I'm trying to put into this:
<iframe width='1006' height='421' src='<?php echo $video; ?>' frameborder='0' allowfullscreen></iframe>
but it seems like it's not working. Could you guys help me out? Thanks! I've been looking for 2 hours and got the same bad results.
A: <?php
$video_array = array
('http://www.youtube.com/embed/rMNNDINCFHg',
'http://www.youtube.com/embed/bDF6DVzKFFg',
'http://www.youtube.com/embed/bDF6DVzKFFg');
shuffle($video_array);
$video = $video_array[0];
?>
then you can just embed it.
<iframe width='1006' height='421' src='<?php echo $video; ?>' frameborder='0' allowfullscreen></iframe>
A: Not sure why you are doing this:
$video = "$video_array[$random]";
Make that look like this
$video = $video_array[$random];
A: I'm all about one-liners :x
<?php echo $video_array[rand(0,(count($video_array)-1))]; ?>
Other answers thus far are not reducing count()... this will cause errors when it grabs the highest number, as the beginning index==0
A: Try this code...
<?php
$video_array = array
('http://www.youtube.com/embed/rMNNDINCFHg',
'http://www.youtube.com/embed/bDF6DVzKFFg',
'http://www.youtube.com/embed/bDF6DVzKFFg');
$total = count($video_array);
$random = rand(0, $total-1);
$video = $video_array[$random];
?>
| |
doc_23538068
|
The XML For Arabic textview
<TextView
android:id="@+id/card_title"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:fontFamily="@font/cairo_bold"
android:padding="15dp"
android:textColor="@android:color/black"
android:textDirection="rtl"
android:textSize="26sp"
android:textStyle="bold" />
As for English textview
<TextView
android:id="@+id/title"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_marginLeft="10dp"
android:layout_weight="1"
android:fontFamily="@font/quicksand"
android:gravity="center_vertical"
android:text="@string/player_header"
android:textColor="@color/white"
android:textSize="14sp" />
| |
doc_23538069
|
-(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath{
static NSString *cellIdentifier = @"cell";
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellIdentifier];
if(cell == nil){
cell = [[UITableViewCell alloc]initWithStyle:UITableViewCellStyleDefault reuseIdentifier:cellIdentifier];
cell.selectionStyle = UITableViewCellStyleDefault;
}
cell.contentView.backgroundColor = [UIColor clearColor];
cell.textLabel.text = @"testing";
if(indexPath.row == 1){
cell.textLabel.textColor = UIColorMake(kOrangeColour);
}
return cell;
}
I have a table view with 14 rows in the storyboard. I changed the color of the 2nd row (i.e. index 1). When I scroll the table view up and down many times, I find that the color of the 14th row (i.e. index 13) also changes; now both that row and row 14 are in orange. As coded, it should change the color of only one row. Why is this happening? Any help will be appreciated. Thanks in advance.
A: Use
if(indexPath.row == 1){
cell.textLabel.textColor = UIColorMake(kOrangeColour);
} else {
cell.textLabel.textColor = [UIColor whiteColor]; // change is here
}
instead of
if(indexPath.row == 1){
cell.textLabel.textColor = UIColorMake(kOrangeColour);
}
Because UITableViewCells are reused, you have to handle both the if and the else branches.
| |
doc_23538070
|
Entity:
public static class IRowVersionExtensions
{
public static string RowVersionAsString(this IRowVersion ivr)
{
return Convert.ToBase64String(ivr.RowVersion);
}
public static void SetRowVersion(this IRowVersion ivr, string rowVersion)
{
ivr.RowVersion = Convert.FromBase64String(rowVersion);
}
}
public interface IRowVersion
{
byte[] RowVersion { get; set; }
}
public class Department : IRowVersion
{
[Key]
public int Id { get; set; }
[Required, MaxLength(255)]
public string Name { get; set; }
public string Description { get; set; }
[Timestamp]
[ConcurrencyCheck]
public byte[] RowVersion { get; set; }
}
DbContext:
public class CompDbContext : DbContextEx
{
public CompDbContext()
: base("Company")
{
this.Configuration.LazyLoadingEnabled = false;
}
public DbSet<Department> Departments { get; set; }
}
The desktop application (console app) has the following code, and throws a DbConcurrencyException as expected: http://pastebin.com/i6yAmVGc
Now, the API controller - when I open the page in two windows and edit one (and save) then try to edit/save the other, it does not throw an exception:
Api Controller Update Action:
[HttpPatch, Route("")]
public Department UpdateDepartment(Department changed)
{
var original = dbContext.Departments.Find(changed.Id);
if (original == null)
this.NotFound();
if (Convert.ToBase64String(changed.RowVersion) != Convert.ToBase64String(original.RowVersion))
Console.WriteLine("Should error.");
original.RowVersion = changed.RowVersion;
original.Name = changed.Name;
original.Description = changed.Description;
dbContext.SaveChanges();
return original;
}
Api Call:
DepartmentVM.prototype.onSave = function (entity) {
var method = entity.id() ? 'PATCH' : 'PUT';
$.ajax({
url: '/api/departments',
method: method,
data: ko.toJSON(entity),
contentType: 'application/json',
dataType: 'JSON'
})
.done(function (data) {
alert('Saved');
entity.rowVersion(data.rowVersion);
entity.id(data.id);
})
.error(function (data) {
alert('Unable to save changes to department.');
});
};
When I break on the line in the controller action:
if (Convert.ToBase64String(changed.RowVersion) != Convert.ToBase64String(original.RowVersion))
On the first save, the changed.RowVersion == original.RowVersion (perfect) and it saves (as expected). On the second page's save, the changed.RowVersion != original.RowVersion (perfect) but it still saves, no exception (not as expected).
Can some one help me understand why this works just fine in a desktop application but does not work in a Web API?
A: It's not working because EF uses the "original" value of RowVersion to perform the concurrency check. In your example, the original value (as far as the DbContext is concerned) is the value from the database, because it was loaded from the database using .Find().
Say, for example, the RowVersion of the changed entity is 1, and the current RowVersion in the database is 2...
// changed's RowVersion is 1
var original = dbContext.Departments.Find(changed.Id);
// original's RowVersion is 2
if (original == null)
this.NotFound();
if (Convert.ToBase64String(changed.RowVersion) != Convert.ToBase64String(original.RowVersion))
Console.WriteLine("Should error."); // 2 != 1, so prints this line
original.RowVersion = changed.RowVersion;
// original's "current" RowVersion is now 1
// ... but its "original" RowVersion is still 2!
original.Name = changed.Name;
original.Description = changed.Description;
dbContext.SaveChanges();
// UPDATE DEPT SET ... WHERE Id = ... AND RowVersion = 2
// (works, therefore no concurrency exception)
To make this work, you can just add the incoming entity to the context...
[HttpPatch, Route("")]
public Department UpdateDepartment(Department changed)
{
dbContext.Entry(changed).State = EntityState.Modified;
dbContext.SaveChanges();
// you'll get an exception if RowVersion has changed
return changed;
}
If you only want to change Name and Description, you can selectively mark those properties as changed and the rest are not updated...
[HttpPatch, Route("")]
public Department UpdateDepartment(Department changed)
{
dbContext.Entry(changed).State = EntityState.Unchanged;
dbContext.Entry(changed).Property(d => d.Name).IsModified = true;
dbContext.Entry(changed).Property(d => d.Description).IsModified = true;
dbContext.SaveChanges();
// you'll get an exception if RowVersion has changed
return changed;
}
The reason the console app worked was a bit lucky. There's a race condition in which if the Find() in t1 executes after the SaveChanges() in t2 (or vice versa), you'd run into the same situation.
| |
doc_23538071
|
I have two MySQL tables, CurrencyTable and CurrencyValueTable.
The CurrencyTable holds the names of the currencies as well as their description and so forth, like so:
CREATE TABLE CurrencyTable ( name VARCHAR(20), description TEXT, .... );
The CurrencyValueTable holds the values of the currencies during the day - a new value is inserted every 2 minutes when the market is open. The table looks like this:
CREATE TABLE CurrencyValueTable ( currency_name VARCHAR(20), value FLOAT, 'datetime' DATETIME, ....);
I have two questions regarding this design:
1) I have more than 200 currencies. Is it better to have a separate CurrencyValueTable for each currency or hold them all in one table?
2) I need to be able to show the current (latest) value of the currency. Is it better to just insert such a field to the CurrencyTable and update it every two minutes or is it better to use a statement like:
SELECT value FROM CurrencyValueTable ORDER BY 'datetime' DESC LIMIT 1
The second option seems slower.. I am leaning towards the first one (which is also easier to implement).
Any input would be greatly appreciated!!
p.s. - please ignore SQL syntax / other errors, I typed it off the top of my head..
Thanks!
A: To your questions:
*
*I would use one table. Especially if you need to report on or compare data from multiple currencies, it will be incredibly improved by sticking to one table.
*If you don't have a need to track the history of each currency's value, then go ahead and just update a single value -- but in that case, why even have a separate table? You can just add "latest value" as a field in the currency table and update it there. If you do need to track history, then you will need the two tables and the SQL you posted will work.
As an aside, instead of FLOAT I would use DECIMAL(10,2). After MySQL 5.0, this will actually have improved results when it comes to currency handling with rounding.
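A minimal sketch of that layout (the column and index names are illustrative, not from the question): a single value table using DECIMAL with a composite index, plus the latest-value lookup for one currency:
CREATE TABLE CurrencyValueTable (
    currency_name VARCHAR(20) NOT NULL,
    value DECIMAL(10,2) NOT NULL,
    value_datetime DATETIME NOT NULL,
    INDEX idx_currency_datetime (currency_name, value_datetime)
);

-- Latest value for a single currency
SELECT value
FROM CurrencyValueTable
WHERE currency_name = 'EUR'
ORDER BY value_datetime DESC
LIMIT 1;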
A: *
*It is better to have one table holding all currencies
*If there is need for historical prices, then the table needs to hold them. A reasonable compromise in many situations is to split the price table into a full list of historical prices and another table which only has the current prices.
*Using data type float can be troublesome. Please be sure you know what you are doing. If not, use a database currency data type.
A: *
*As your webservice is transactional, it is better to access fewer tables at the same time. Since you will be reading and writing a lot, I would suggest having a single table.
*It's better to add a field to the CurrencyTable and update it, rather than hitting two tables for a single request.
| |
doc_23538072
|
The first row shows true for all three ISSUBTOTAL statements. This is a grand total, not a year, month or day total.
The second row shows False for Is Year Total yet I would expect a True here since it is a yearly total. I don't know why Is Month Total and Is Day Total show true.
The last row shows False for Is Day Total yet I would expect True since this is a daily total.
A: A total, from the DAX perspective, is one where the column in question contributes no filter context. Your [Is Year Total] indicates TRUE when no year is in context (i.e. the total across all years) and FALSE when a year is in context (i.e. the measure is filtered by a year). So for your second row, where we have the following:
[Is Year Total] = FALSE
[Is Month Total] = TRUE
[Is Day Total] = TRUE
This indicates that there is a year in context (2008), but there is no month or day in context. So a measure would be evaluated with a filter context of [Year]=2008.
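For reference, ISSUBTOTAL is evaluated inside a SUMMARIZE query that uses ROLLUP; a rough sketch along those lines (the table and column names are assumptions, not taken from the question):
EVALUATE
SUMMARIZE (
    Sales,
    ROLLUP ( 'Date'[Year], 'Date'[Month], 'Date'[Day] ),
    "Amount", SUM ( Sales[Amount] ),
    "Is Year Total", ISSUBTOTAL ( 'Date'[Year] ),
    "Is Month Total", ISSUBTOTAL ( 'Date'[Month] ),
    "Is Day Total", ISSUBTOTAL ( 'Date'[Day] )
)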
A: Subtotal:
the total of one set of a larger group of figures to be added.
I would interpret it this way for the second row: you are looking at the Year level, at a single value of the whole column, so it is not a subtotal of Year.
| |
doc_23538073
|
The command would look like {MOVE, 60, 70} or {REQUEST_DATA}, where I'd have the arduino read in the first value, if it's "MOVE" then it drives some motors with speed 60 and 70, and if it's "REQUEST_DATA" it would respond with some data like battery status, gps location etc.
Sending this as a string of characters and then parsing is really a huge pain! I've tried days (!frustration!) without it working properly. Is there a way to serialize a data structure like {'MOVE', 70, 40}, send the bytes to the arduino and reconstruct into a struct there? (Using struct.pack() maybe? But I don't yet know how to "unpack" in the arduino).
I've looked at serial communication on arduino and people seem to just do it the 'frustrating' way - sending single chars. Plus all talk about sending struct from arduino to python, and not the other way round.
A: There are a number of ways to tackle this problem, and the best solution depends on exactly what data you're sending back and forth.
The simplest solution is to represent commands a single bytes (e.g., M for MOVE or R for REQUEST_DATA), because this way you only need to read a single byte on the arduino side to determine the command. Once you know that, you should know how much additional data you need to read in order to get the necessary parameters.
For example, here's a simple program that understands two commands:
*
*A command to move to a given position
*A command to turn the built-in LED on or off
The code looks like this:
#define CMD_MOVE 'M'
#define CMD_LED 'L'
struct Position {
int8_t xpos, ypos;
};
struct LEDState {
byte state;
};
void setup() {
Serial.begin(9600);
pinMode(LED_BUILTIN, OUTPUT);
// We need this so our Python code knows when the arduino is
// ready to receive data.
Serial.println("READY");
}
void loop() {
char cmd;
size_t nb;
if (Serial.available()) {
cmd = Serial.read();
switch (cmd) {
case CMD_MOVE:
struct Position pos;
nb = Serial.readBytes((char *)&pos, sizeof(struct Position));
Serial.print("Moving to position ");
Serial.print(pos.xpos);
Serial.print(",");
Serial.println(pos.ypos);
break;
case CMD_LED:
struct LEDState led;
nb = Serial.readBytes((char *)&led, sizeof(struct LEDState));
if (led.state) {
digitalWrite(LED_BUILTIN, HIGH);
} else {
digitalWrite(LED_BUILTIN, LOW);
}
Serial.print("LED is ");
Serial.println(led.state ? "on" : "off");
break;
}
}
}
A fragment of Python code that interacts with the above might look like this (assuming that port is a serial.Serial object):
print("waiting for arduino...")
line=b""
while not b"READY" in line:
line = port.readline()
port.write(struct.pack('bbb', ord('M'), 10, -10))
res = port.readline()
print(res)
for i in range(10):
port.write(struct.pack('bb', ord('L'), i%2))
res = port.readline()
print(res)
time.sleep(0.5)
port.write(struct.pack('bbb', ord('M'), -10, 10))
res = port.readline()
print(res)
Running the above Python code, with the Arduino code loaded on my Uno, produces:
waiting for arduino...
b'Moving to position -10,10\r\n'
b'LED is off\r\n'
b'LED is on\r\n'
b'LED is off\r\n'
b'LED is on\r\n'
b'LED is off\r\n'
b'LED is on\r\n'
b'LED is off\r\n'
b'LED is on\r\n'
b'LED is off\r\n'
b'LED is on\r\n'
b'Moving to position 10,-10\r\n'
This is simple to implement and doesn't require much in the way of decoding on the Arduino side.
For more complex situations, you may want to investigate more complex serialization solutions: for example, you can send JSON to the arduino and use something like https://arduinojson.org/ to deserialize it on the Arduino side, but that's going to be a much more complex solution.
In most cases, the speed at which this works is going to be limited by the speed of the serial port: the default speed of 9600bps is relatively slow, and you're going to notice that with larger amounts of data. Using higher serial port speeds will make things noticeably faster: I'm too lazy to look up the max. speed supported by the Arduino, but my UNO works at least as fast as 115200bps.
| |
doc_23538074
|
t = x // y
In this case t must be an integer number, but python returns it as float.
>>> x, y = 3.0, 2.0
>>> x // y
1.0
By the convention and documentation, the output type of x // y is float if at least one of x or y is float. My question is WHY is the convention that way: What is the advantage of getting the result in float, and not int?
For my understanding, floor division always returns an integer. So the relation int(x//y) == x//y always holds. While there are some corner cases such as nan, "integer floor division" also has corner cases such as division by zero.
So my question is why float is better? In what cases will it differ?
This is not a duplicate of this question which asks if this is a bug. Instead, I want to know what is the reason why this behaviour is advantageous.
A: TLDR: Forcing // to be (float, float) -> int would not have any benefit, but unnecessary cost and type loss whenever -> float is sufficient.
In general, computation of // when any operand is a float is only correct if performed as float: a float operand may contain a fractional part that cannot be represented as an integer.
>>> 4.3 // 1.1
3.0
>>> int(4.3) // int(1.1)
4
As such, // for any float operand is inherently a float computation and yields such a result (CPython purely uses C double arithmetic). Providing the result as int would still compute the result as float and merely convert it to int – there would be no gain in precision.
With the behaviour specified by PEP 238, programmers can decide for themselves whether they desire to enforce int type or prefer the real float result.
| |
doc_23538075
|
Is there any best way or good practices to build different layouts for multiple platforms without having to duplicate all /server and packages again in different projects? I mean, keeping everything on same place?
A: I assume you don't have to duplicate the server or anything else but the client folder content. The way I understand it, as long as you use a meteor client, the server side is agnostic of what the client specifically is.
Let's say you want a desktop bootstrap version of your app, and an ionic version for mobile. You just need to route the client on the right client subfolder (bootstrap or ionic) in the Meteor startup code for client depending on their user agent.
Unless you plan to use dedicated servers for each (meaning it would be like two different apps connecting to the same mongo database) there is no way to split everything in two versions and keep it as a single app (i.e. both mobile and desktop clients are handled by the same meteor server process).
Bottom-line: if, after evaluating it, you consider that the delta in the amount of client side code sent is too big between a dedicated version and a multipurpose version (or, to rephrase it, the useless packages weigh too much), then make two different servers and handle the redirection in a third. If not, keep two different clients working with the same server.
| |
doc_23538076
|
I have tried multiple different input shapes including (60000, 28, 28) (1, 28, 28) (28, 28) (28, 28, 1) but none of them seem to work.
model = kr.Sequential()
model.add(InputLayer(input_shape=(60000, 28, 28)))
model.add(Dense(units=784, activation='relu'))
model.add(Dense(units=392, activation='relu'))
model.add(Dense(units=196, activation='relu'))
model.add(Dense(units=10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
training = model.fit(x=images_array, y=labels_array, epochs=10, batch_size=256)
I would expect it to work with input shape (60000, 28, 28) but I always get this error:
ValueError: Error when checking input: expected input_1 to have 4
dimensions, but got array with shape (60000, 28, 28)
Edit:
Thanks to everyone who answerd. cho_uc answer indeed worked, which is why I accepted it.
What I should have mentioned in the post is that I was trying to build a model consisting only of Dense layers, so I can use it as a benchmark for future models.
I solved the input layer problem with:
images_array = images_array.reshape(-1, 28 * 28)
model.add(InputLayer(input_shape=(784, )))
A: Keras Conv2D layer performs the convolution operation. It requires its input to be a 4-dimensional array.
We have to reshape the input to ( , 1, 28, 28) or possibly to ( , 28, 28, 1), depending on your setup and backend (theano or tensorlow image layout convention).
from keras import backend as K
if K.image_data_format() == 'channels_first' :
input_shape = (1, 28, 28)
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28)
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28)
else:
input_shape = (28, 28, 1)
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
So, you should reshape your data to (60000, 28, 28, 1) or (60000, 1, 28, 28)
A: Two corrections are required.
*
*TF and Keras expect image dimensions as (Width, Height, Channels), channels being 3 for RGB images and 1 for greyscale images.
model.add(InputLayer(input_shape=(28, 28, 1)))
*The training input to fit() method must be of dimension (Number of samples, Width, Height, Channels).
assert images_array.shape == (60000, 28, 28, 1)
| |
doc_23538077
|
A: With Trac 1.4 dropping support for ITemplateStreamFilter, the recommendation is to do interface modifications using JavaScript. You can place a JavaScript file in your site or shared htdocs directory and add the script to every page using SiteHtml customization. See Trac interface customization for more details.
You can restrict adding the JavaScript by adding a conditional check when adding the link element. For example:
<link py:if="'TRAC_ADMIN' in perm" ... />
or
<link py:if="req.authname == 'anonymous'" ... />
| |
doc_23538078
|
Sample Powershell Script
write-warning "WITHOUT SPACE"
$fl1 = "d:\nospace\a.txt"
$fl2 = "d:\nospace\b.txt"
$arg1 = "-source:filePath=`"$fl1`""
$arg2 = "-dest:filePath=`"$fl2`""
msdeploy.exe "-verb:sync",$arg1,$arg2
write-warning "WITH SPACE"
$fl1 = "d:\space space\a.txt"
$fl2 = "d:\space space\b.txt"
$arg1 = "-source:filePath=`"$fl1`""
$arg2 = "-dest:filePath=`"$fl2`""
msdeploy.exe "-verb:sync",$arg1,$arg2
When the folder name has no spaces, it works fine, however when it has a space it fails:
msdeploy.exe : Error: Unrecognized argument '"-source:filePath="d:\space'. All arguments must begin with "-".
At E:\PAWS\Payroll System\PES-Branch-FW\Publish\DeployPackage.ps1:253 char:9
+ msdeploy.exe "-verb:sync",$arg1,$arg2
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (Error: Unrecogn...begin with "-".:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
Error count: 1.
Manually calling msdeploy.exe using the following command:
msdeploy -verb:sync -source:filePath="d:\space space\a.txt" -dest:filePath="d:\space space\b.txt"
This works fine from Command Prompt but does not work from PowerShell.
I have used this blog as an aid but without any luck: http://trycatchfail.com/blog/post/The-trials-and-tribulations-of-using-MSDeploy-with-PowerShell.aspx
Update
I have looked into some more examples. If you perform a standard copy operation powershell is able to pass the path to cmd.exe (copy).
write-warning "WITHOUT SPACE"
$fl1 = "d:\nospace\a.txt"
$fl2 = "d:\nospace\b.txt"
$args = ('"{0}" "{1}"' -f $fl1, $fl2)
write-host $args
cmd /c copy $args
write-warning "WITH SPACE"
$fl1 = "d:\space space\a.txt"
$fl2 = "d:\space space\b.txt"
$args = ('"{0}" "{1}"' -f $fl1, $fl2)
write-host $args
cmd /c copy $args
Using the same approach to update the msdeploy snippet still fails because of the space.
write-warning "WITHOUT SPACE"
$fl1 = "d:\nospace\a.txt"
$fl2 = "d:\nospace\b.txt"
$arg1 = '-source:filePath="{0}"' -f $fl1
$arg2 = '-dest:filePath="{0}"' -f $fl2
$args = '-verb:sync',$arg1, $arg2
msdeploy.exe $args
write-warning "WITH SPACE"
$fl1 = "d:\space space\a.txt"
$fl2 = "d:\space space\b.txt"
$arg1 = '-source:filePath="{0}"' -f $fl1
$arg2 = '-dest:filePath="{0}"' -f $fl2
$args = '-verb:sync',$arg1, $arg2
msdeploy.exe $args
One Solution
https://stackoverflow.com/a/12813048/1497635
I would like to add that three escape characters is absolutely crazy. There must be a neater solution to the problem.
A: I used the suggestion from the following:
How do you call msdeploy from powershell when the parameters have spaces?
To derive a "cleaner" solution.
$msdeploy = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe";
write-warning "WITHOUT SPACE"
$fl1 = "d:\nospace\a.txt"
$fl2 = "d:\nospace\b.txt"
$md = $("`"{0}`" -verb:sync -source:filePath=`"{1}`" -dest:filePath=`"{2}`"" -f $msdeploy, $fl1, $fl2)
cmd.exe /C "`"$md`""
write-warning "WITH SPACE"
$fl1 = "d:\space space\a.txt"
$fl2 = "d:\space space\b.txt"
$md = $("`"{0}`" -verb:sync -source:filePath=`"{1}`" -dest:filePath=`"{2}`"" -f $msdeploy, $fl1, $fl2)
cmd.exe /C "`"$md`""
A: When invoking commands PowerShell does some auto quoting that does not work well with MSDeploy. There are a couple of ways to avoid the auto quoting. One is to use the Start-Process cmdlet where you can specify the exact command line that you want but it can become a bit tedious to get the output of the new process to appear as output of the PowerShell script that you are running.
Another option is to use the --% specifier to turn off PowerShell parsing. However, doing that will not allow you to use variables in the command line because - well, parsing has been turned off. But you can get around this by using the Invoke-Expression cmdlet to first build the command line including the --% and whatever variables you want and then let PowerShell evaluate it:
$fl1 = "D:\space space\a.txt";
$fl2 = "D:\space space\b.txt";
$arguments = "-verb:sync -source:filePath=""$fl1"" -dest:filePath=""$fl2"""
$commandLine = 'msdeploy.exe --% ' + $arguments
Invoke-Expression $commandLine
A: I've found that this works:
$arguments=@(
"-verb:sync"
,"-source:metakey=lm/$IISSite,computername=$computer,includeAcls=true"
,"-dest:metakey=lm/w3svc/$DestSite"
,"-enableLink:appPool"
,"-allowUntrusted"
,"-skip:attributes.name=ServerBindings"
,"-skip:attributes.name=SecureBindings"
#,"-whatif"
)
Write-Output "Running MSDeploy with the following arguments"
$arguments
$logfile="Sync_$(get-date -format yyyyMMdd-HHmm).log"
Start-Process -FilePath "$msdeploy\msdeploy.exe" -ArgumentList $arguments -WorkingDirectory $msdeploy -RedirectStandardError "Error.txt" -RedirectStandardOutput $logfile -Wait -NoNewWindow
A: Found an easy solution. Ref: http://answered.site/all-arguments-must-begin-with--at-cwindowsdtldownloadswebserviceswebservicesidservicepublishedwebsitesidservicedeploymentidservicewsdeployps123/4231580/
$msdeploy = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
$msdeployArgs = @(
"-verb:sync",
"-source:iisApp='Default Web Site/HelloWorld'",
"-verbose",
"-dest:archiveDir='c:\temp1'"
)
Start-Process $msdeploy -NoNewWindow -ArgumentList $msdeployArgs
| |
doc_23538079
|
http://Siteurl:8080/solr/metro/select?q=*:*&rows=0&wt=json&indent=true&facet=true&facet.field=Make
But as a result, suppose I have 'Ford Fiesta' in the Make field. I am getting two results instead of one, as shown below:
Ford => 21
Fiesta => 21
It is separating the field by spaces.
I want it like
Ford Fiesta => 21
Please let me know the valid method to do so.
Thanks
A: The problem is very simple here. You are trying to facet on a tokenized field (text). This means each token will be counted separately. I suggest you add a new field (in the schema.xml file) which you will feed with the same data as the Make field (e.g. using a copy field). This new field should be string, or text with KeywordTokenizer.
Please look at the example below. I added two types there: string and text_not_tokenized. Then I defined two fields, Make_string and Make_nonTokenized. When you facet on either of them you should see "Ford Fiesta".
So you can just query
http://Siteurl:8080/solr/metro/select?q=*:*&rows=0&wt=json&indent=true&facet=true&facet.field=Make_string
or
http://Siteurl:8080/solr/metro/select?q=*:*&rows=0&wt=json&indent=true&facet=true&facet.field=Make_nonTokenized
...
<fieldType name="string" class="solr.StrField" sortMissingLast="true" />
<fieldType name="text_not_tokenized" class="solr.TextField">
<analyzer>
<tokenizer class="solr.KeywordTokenizerFactory"/>
</analyzer>
</fieldType>
...
<field name="Make_string" type="string">
<field name="Make_nonTokenized" type="text_not_tokenized">
....
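The copy field mentioned above would, assuming the source field is named Make as in the question, look roughly like this in schema.xml:
<copyField source="Make" dest="Make_string"/>
<copyField source="Make" dest="Make_nonTokenized"/>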
| |
doc_23538080
|
import { https } from 'firebase-functions';
import express from "express";
import houseRoute from './src/houseRouter.js';
import userRoute from '../src/userRouter.js';
export const house = https.onRequest(houseRoute);
export const user = https.onRequest(userRoute);
houseRouter.js is below
import express from "express"
import request from 'request'
import {xml2json} from './common/common.js'
import {db, serviceKey} from './common/constant.js'
const router = express.Router();
router.post('/houseList', function(req, res, next){
let body = req.body;
getHouseList(res, body);
});
I don't know why I can't call it via the IP address.
Experts, please help me.
I tried the following:
import { https } from 'firebase-functions';
import express from "express";
import houseRoute from './src/houseRouter.js';
import userRoute from './src/userRouter.js';
const app = express();
app.listen(5001, "0.0.0.0");
export const api = https.onRequest(app);
export const house = https.onRequest(houseRoute);
export const user = https.onRequest(userRoute);
but I still couldn't call it via the IP address.
| |
doc_23538081
|
btn is Control
btn <- ControlCreate(name,typButton,mouseX,mouseY,mouseXRel-mouseX,mouseYRel-mouseY,True)
btn..Caption = name
btn..Process[trtClick] = buttonAction
the buttonAction code is:
Info("You pressed: " + btn..Caption)
But the result of the buttonAction is always the name of the last button I created. For example, I create a button named "Luca" and when I click it the result is: You pressed: Luca. Then I create a new button named "Antonio", but when I press the "Luca" button the output is: You pressed: Antonio. How can I assign a distinct button action at runtime for every button?
A: Did you try to create a control per button ?
Where did you run the creation of the buttons ?
Like:
btn1 is Control
btn1 <- ControlCreate("test1",typButton,mouseX,mouseY,mouseXRel-mouseX,mouseYRel-mouseY,True)
btn1..Caption = "test1"
btn1..Process[trtClick] = buttonAction
btn2 is Control
btn2 <- ControlCreate("test2",typButton,mouseX,mouseY,mouseXRel-mouseX,mouseYRel-mouseY,True)
btn2..Caption = "test2"
btn2..Process[trtClick] = buttonAction
What result do you get ?
I don't have the latest version of WinDev for tests.
I'm afraid "btn" becomes shared across the window when you use
<- ControlCreate
| |
doc_23538082
|
class Profile(models.Model):
user = OneToOneField(User, on_delete=models.CASCADE)
profile_type = models.CharField()
I want to make a Django REST Framework serializer which allows for creation and retrieval of a User object's nested attributes as well as the "profile_type" attribute.
I want the names to be specified on the POST request as simply as "username", "password", "email", etc. - instead of "profile_username", "profile_password", ...
So far I have
class ProfileSerializer(serializers.ModelSerializer):
username = serializers.CharField(source='profile_user_username')
password = serializers.CharField(source='profile_user_password')
email = serializers.CharField(source='profile_user_email')
class Meta:
model = Profile
fields = ('id',
'profile_user_username', 'profile_user_password', 'profile_user_email',
'username',
'password',
'email')
depth = 2
But - I've been getting an error:
ImproperlyConfigured: Field name 'profile_user_username' is not valid for model 'Profile'
Am I getting the syntax for nested fields wrong? Or is it something else?
A: Try:
class User(models.Model):
username = models.CharField()
password = models.CharField()
email = models.CharField()
class UserSerializer(serializers.ModelSerializer):
class Meta:
model = User
class ProfileSerializer(serializers.ModelSerializer):
user = UserSerializer(many=True)
class Meta:
model = Profile
fields = ('user', 'profile_type',)
def create(self, validated_data):
user_data = validated_data.pop('user')
user = User.objects.create(**user_data)
profile = Profile.objects.create(user=user, **validated_data)
return profile
and check out
this for Writable nested serializers.
For dealing with a nested object, check out this and this.
| |
doc_23538083
|
Following is the flow of page redirection to simplify the issue.
*
*User 1 > Login > Dashboard > Create Employee > Employee List > Logout.
*User 2 > Login > Dashboard > (Press Back Button) Employee List > (Again Press Back Button) Create Employee > (Again Press Back Button) Dashboard.
The above page redirection flows correctly if I log in with the same user (User 1).
Maybe this issue can be solved using Spring Web Flow, but how do I use Spring Web Flow?
Can anybody help me with how to handle the back button issue?
A: You may try clearing the history on logout and setting up cache control as shown in the URL below.
How Disable Browser Back Button only after Logout in mvc3.net
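As a rough sketch of the cache-control side of that approach (the interceptor class and its registration are assumptions, not taken from the linked answer), a Spring MVC interceptor can add no-cache headers so the Back button cannot show a stale authenticated page from the browser cache:
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

public class NoCacheInterceptor extends HandlerInterceptorAdapter {
    @Override
    public boolean preHandle(HttpServletRequest request,
                             HttpServletResponse response,
                             Object handler) {
        // Ask the browser not to serve secured pages from its cache.
        response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
        response.setHeader("Pragma", "no-cache");
        response.setDateHeader("Expires", 0);
        return true;
    }
}
The interceptor still has to be registered in the MVC configuration, and the session must be invalidated on logout for this to be effective.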
| |
doc_23538084
|
I'm playing with a small web application to store attendance on a daily basis, and report on it based on month and year.
this is the attendance collection on DB :
{
_id : 5f3b7f85a189d04eec4ec2e8
dated :2020-03-18T12:01:25.348+00:00
empId:"10013"
employee:5f2b66620ec17b4b1034549a
weekOff:false
inTime:2020-08-18T12:01:34.308+00:00
outTime:2020-08-18T12:10:34.308+00:00
present:true
startLate:true
leaveEarly:true
} ........
how do I get statistics like this :
{
month : 01,
year : 2020,
present : 75 %
absent : 25%
startLate : 10%
leaveEarly: 25%
},
{
month : 02,
year : 2020,
present : 80 %
absent : 22%
startLate : 20%
leaveEarly: 05%
}, ...
I was trying but unable to get it right
A: Start by destructuring the date into its constituent parts using the $dateToParts operator.
After that group based on month and year, accumulate all present, startLate and leaveEarly and the count too.
After grouping, project the required fields and calculate the percentage.
Here's a fiddle of the below.
var pipeline = [
{
$addFields: {
date: {
$dateToParts: {
date: "$dated"
}
}
}
},
{
$group: {
_id: {
month: "$date.month",
year: "$date.year"
},
sum: {
$sum: 1
},
present: {
$sum: {
$cond: {
if: { $eq: ['$present', true] },
then: 1,
else: 0
}
}
},
absent: {
$sum: {
$cond: {
if: { $eq: ['$present', false] },
then: 1,
else: 0
}
}
},
startLate: {
$sum: {
$cond: {
if: { $eq: ['$startLate', true] },
then: 1,
else: 0
}
}
},
leaveEarly: {
$sum: {
$cond: {
if: { $eq: ['$leaveEarly', true] },
then: 1,
else: 0
}
}
}
}
},
{
$project: {
month: '$_id.month',
year: '$_id.year',
"present": {
$multiply: [
{ $divide: ["$present", "$sum"] },
100
]
},
"absent": {
$multiply: [
{ $divide: ["$absent", "$sum"] },
100
]
},
"startLate": {
$multiply: [
{ $divide: ["$startLate", "$sum"] },
100
]
},
"leaveEarly": {
$multiply: [
{ $divide: ["$leaveEarly", "$sum"] },
100
]
}
}
}
];
db.collection.aggregate(pipeline);
| |
doc_23538085
|
When I render with DataTables server-side, it takes 13 seconds, even when I filter it.
I don't know why it takes so long...
CREATE VIEW `vw_cashback` AS
SELECT
`tb_user`.`nik` AS `nik`,
`tb_user`.`full_name` AS `nama`,
`tb_ms_location`.`location_name` AS `lokasi`,
`tb_transaction`.`date_transaction` AS `tanggal_setor`,
sum(CASE WHEN `tb_transaction_detail`.`vehicle_type`=1 THEN 1 ELSE 0 END) AS `mobil`,
sum(CASE WHEN `tb_transaction_detail`.`vehicle_type`=2 THEN 1 ELSE 0 END) AS `motor`,
sum(CASE WHEN `tb_transaction_detail`.`vehicle_type`=3 THEN 1 ELSE 0 END) AS `truck`,
sum(CASE WHEN `tb_transaction_detail`.`vehicle_type`=4 THEN 1 ELSE 0 END) AS `speda`,
sum(`tb_transaction_detail`.`total`) AS `total_global`,
(sum(`tb_transaction_detail`.`total`) * 0.8) AS `total_user`,
(sum(`tb_transaction_detail`.`total`) * 0.2) AS `total_tgr`,
((sum(`tb_transaction_detail`.`total`) * 0.2) / 2) AS `total_cashback`,
(curdate() - cast(`tb_user`.`created_at` AS date)) AS `status`
FROM `tb_user`
JOIN `tb_transaction` ON `tb_user`.`id` = `tb_transaction`.`user_id`
JOIN `tb_transaction_detail` ON `tb_transaction`.`id` = `tb_transaction_detail`.`transaction_id`
JOIN `tb_ms_location` ON `tb_ms_location`.`id` = `tb_transaction`.`location_id`
GROUP BY
`tb_user`.`id`,
`tb_transaction`.`date_transaction`,
`tb_user`.`nik`,
`tb_user`.`full_name`,
`tb_user`.`created_at`,
`tb_ms_location`.`location_name`
thanks
A: The unfiltered query must be slow, because it takes all records from all tables, joins and aggregates them.
But you say the view is still slow when you filter. The question is: How do you filter? As you are aggregating by user, location and transaction date, it should be one of these. However, you don't have the user ID or the transaction ID in your result list. This doesn't feel natural and I'd suggest you add them, so a query like
select * from vw_cashback where user_id = 5
or
select * from vw_cashback where transaction_id = 12345
would be possible.
As is, you'd have to filter by location name or user nik / name. So if you want to filter that way, create indexes for the lookup:
CREATE INDEX idx_location_name ON tb_ms_location(location_name, id)
CREATE INDEX idx_user_name ON tb_user(full_name, id)
CREATE INDEX idx_user_nik ON tb_user(nik, id)
The latter two can even be turned into covering indexes (i.e. indexes containing all columns used in the query) that may speed up the process further:
CREATE INDEX idx_user_name ON tb_user(full_name, id, nik, created_at);
CREATE INDEX idx_user_nik ON tb_user(nik, id, full_name, created_at);
As for access via the IDs, you may also want covering indexes:
CREATE INDEX idx_location_id ON tb_ms_location(id, location_name)
CREATE INDEX idx_user_id ON tb_user(id, nik, full_name, created_at);
| |
doc_23538086
|
I'm aware of algorithms for directly rendering CSG shapes, but I want to convert it into a wireframe mesh just once so that I can render it "normally"
To add a little more detail. Given a description of a shape such as "A cube here, intersection with a sphere here, subtract a cylinder here" I want to be able to calculate a polygon mesh.
A: These libraries seems to do what you want:
www.solidgraphics.com/SolidKit/
carve-csg.com/
gts.sourceforge.net/
A: See also "Constructive Solid Geometry for Triangulated Polyhedra" (1990) Philip M. Hubbard doi:10.1.1.34.9374
A: There are two main approaches. If you have a set of polygonal shapes, it is possible to create a BSP tree for each shape, then the BSP trees can be merged. From Wikipedia,
1990 Naylor, Amanatides, and Thibault provide an algorithm for merging two bsp trees to form a new bsp tree from the two original trees. This provides many benefits including: combining moving objects represented by BSP trees with a static environment (also represented by a BSP tree), very efficient CSG operations on polyhedra, exact collisions detection in O(log n * log n), and proper ordering of transparent surfaces contained in two interpenetrating objects (has been used for an x-ray vision effect).
The paper is found here Merging BSP trees yields polyhedral set operations.
Alternatively, each shape can be represented as a function over space (for example, signed distance to the surface). As long as the surface is defined as where the function is equal to zero, the functions can then be combined using (MIN == intersection), (MAX == union), and (NEGATION == not) operators to mimic the set operations. The resulting surface can then be extracted as the positions where the combined function is equal to zero using a technique like Marching Cubes. Better surface extraction methods like Dual Marching Cubes or Dual Contouring can also be used. This will, of course, result in a discrete approximation of the true CSG surface. I suggest using Dual Contouring, because it is able to reconstruct sharp features like the corners of cubes.
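To make the sign convention concrete, here is a minimal Python sketch (function names and the test shape are my own) of combining implicit functions that are positive inside, negative outside, and zero on the surface; the combined function could then be sampled on a grid and handed to a surface-extraction method such as Marching Cubes:
def sphere(cx, cy, cz, r):
    # Positive inside, zero on the surface, negative outside.
    return lambda x, y, z: r - ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5

def box(hx, hy, hz):
    # Axis-aligned box centred at the origin with half-extents hx, hy, hz.
    return lambda x, y, z: min(hx - abs(x), hy - abs(y), hz - abs(z))

def union(f, g):         # MAX == union under the positive-inside convention
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def intersection(f, g):  # MIN == intersection
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

def complement(f):       # NEGATION == not
    return lambda x, y, z: -f(x, y, z)

def subtract(f, g):      # A minus B == A intersected with (not B)
    return intersection(f, complement(g))

# A unit cube with a sphere carved out of one corner.
shape = subtract(box(0.5, 0.5, 0.5), sphere(0.5, 0.5, 0.5, 0.6))
print(shape(0.0, 0.0, 0.0) > 0)      # True: the centre survives
print(shape(0.45, 0.45, 0.45) > 0)   # False: this corner is carved away by the sphere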
A: Here are some Google Scholar links which may be of use.
From what I can tell of the abstracts, the basic idea is to generate a point cloud from the volumetric data available in the CSG model, and then use some more common algorithms to generate a mesh of faces in 3D to fit that point cloud.
Edit: Doing some further research, this kind of operation is called "conversion from CSG to B-Rep (boundary representation)". Searches on that string lead to a useful PDF:
http://www.scielo.br/pdf/jbsmse/v29n4/a01v29n4.pdf
And, for further information, the key algorithm is called the "Marching Cubes Algorithm". Essentially, the CSG model is used to create a volumetric model of the object with voxels, and then the Marching Cubes algorithm is used to create a 3D mesh out of the voxel data.
A: You could try to triangulate (tetrahedralize) each primitive, then perform the boolean operations on the tetrahedral mesh, which is "easier" since you only need to worry about tetrahedron-tetrahedron operations. Then you can perform boundary extraction to get the B-rep. Since you know the shapes of your primitives analytically, you can construct custom tetrahedralizations of your primitives to suit your needs instead of relying on a mesh generation library.
For example, suppose your object was the union of a cube and a cylinder, and suppose you have a tetrahedralization of both objects. In order to compute the boundary representation of the resulting object, you first label all the boundary facets of the tetrahedra of each primitive object. Then, you perform the union operation: if two tetrahedra are disjoint, then nothing needs to be done; both tetrahedra must exist in the resulting polyhedron. If they intersect, then there are a number of cases (probably on the order of a dozen or so) that need to be handled. In each of these cases, the volume of the two tetrahedra needs to be re-triangulated in a way that respects the surface constraints. This is made somewhat easier by the fact that you only need to worry about tetrahedra, as opposed to more complicated shapes. The boundary facet labels need to be maintained in the process so that in the final collection of tetrahedra, the boundary facets can be extracted to form a triangle mesh of the surface.
A: I've had some luck with the BRL-CAD application MGED where I can construct a convex polyhedron by intersecting planes using CSG then extract the boundary representation using the command-line g-stl command. Check http://brlcad.org/
Malcolm
A: If you can convert you input primitives to polyhedral meshes then you could use libigl's C++ mesh boolean routines. The following computes the union of a mesh (VA,FA) and another mesh (VB,FB):
igl::mesh_boolean(VA,FA,VB,FB,"union",VC,FC);
where VA is a #VA by 3 matrix of vertex positions and FA is a #FA by 3 matrix of triangle indices into VA, and so on. The technique used in libigl is different from those two mentioned in Joe's answer. All pairs of triangles are intersected against each other (using spatial acceleration) and then resulting sub-triangles are categorized as belonging to the output surface or not.
| |
doc_23538087
|
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent" >
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:padding="24dp"
android:id="@+id/question_text_view"/>
<ImageView
android:id="@+id/imageView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:src="@mipmap/img1"
android:padding="24dp"/>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_vertical|center_horizontal"
android:orientation="horizontal">
<Button
android:id="@+id/trueButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/true_button" />
<Button
android:id="@+id/falseButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/false_button" />
</LinearLayout>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content">
<ImageButton
android:id="@+id/prev_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/next_button"
android:drawableRight="@drawable/arrow_left"
android:drawablePadding="4dp"
android:src="@drawable/arrow_left"
android:contentDescription="arrowLeft"/>
<ImageButton
android:id="@+id/next_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/next_button"
android:drawableRight="@drawable/arrow_right"
android:drawablePadding="4dp"
android:src="@drawable/arrow_right"
android:contentDescription="arrowRight"/>
</LinearLayout>
</FrameLayout>
What I'm trying to do is get the prev and next ImageButtons underneath the trueButton and falseButton, but I can't get it to work. I tried:
android:layout_gravity="bottom|right"
But that has no effect on it whatsoever. I'm required to use a FrameLayout for this.
This is what it currently displays as when I run:
Any help would be greatly appreciated. Thank you!
Katie
A: I don't think FrameLayout is the right tool for this job; its capabilities are relatively limited for arranging multiple children in relation to each other. You'd generally be better off using something like a ConstraintLayout to accomplish this.
If, however, FrameLayout is a requirement, then I think the only thing you can do is wrap both the true/false and the prev/next buttons in a vertical LinearLayout, and then center that in the FrameLayout.
<FrameLayout>
<LinearLayout
android:layout_gravity="center"
android:orientation="vertical"
android:gravity="center_horizontal">
<LinearLayout
android:orientation="horizontal">
<Button/>
<Button/>
</LinearLayout>
<LinearLayout
android:orientation="horizontal">
<Button/>
<Button/>
</LinearLayout>
</LinearLayout>
</FrameLayout>
A: Add a LinearLayout wrapper inside your FrameLayout.
Try this:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent" >
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="vertical"
android:layout_gravity="center">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:padding="24dp"
android:id="@+id/question_text_view"/>
<ImageView
android:id="@+id/imageView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:src="@mipmap/img1"
android:padding="24dp"/>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_vertical|center_horizontal"
android:orientation="horizontal">
<Button
android:id="@+id/trueButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/true_button" />
<Button
android:id="@+id/falseButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/false_button" />
</LinearLayout>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center">
<ImageButton
android:id="@+id/prev_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/next_button"
android:drawableRight="@drawable/arrow_left"
android:drawablePadding="4dp"
android:src="@drawable/arrow_left"
android:contentDescription="arrowLeft"/>
<ImageButton
android:id="@+id/next_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/next_button"
android:drawableRight="@drawable/arrow_right"
android:drawablePadding="4dp"
android:src="@drawable/arrow_right"
android:contentDescription="arrowRight"/>
</LinearLayout>
    </LinearLayout>
</FrameLayout>
A: Try this:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent" >
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:padding="24dp"
android:id="@+id/question_text_view"/>
<ImageView
android:id="@+id/imageView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:src="@mipmap/img1"
android:padding="24dp"/>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_vertical|center_horizontal"
android:orientation="vertical">
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="horizontal">
<Button
android:id="@+id/trueButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/true_button" />
<Button
android:id="@+id/falseButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/false_button" />
</LinearLayout>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal">
<ImageButton
android:id="@+id/prev_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/next_button"
android:drawableRight="@drawable/arrow_left"
android:drawablePadding="4dp"
android:src="@drawable/arrow_left"
android:contentDescription="arrowLeft"/>
<ImageButton
android:id="@+id/next_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/next_button"
android:drawableRight="@drawable/arrow_right"
android:drawablePadding="4dp"
android:src="@drawable/arrow_right"
android:contentDescription="arrowRight"/>
</LinearLayout>
</LinearLayout>
</FrameLayout>
A: Try this:
I am using FrameLayout like this:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent" >
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:padding="24dp"
android:id="@+id/question_text_view"/>
<ImageView
android:id="@+id/imageView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:src="@mipmap/ic_launcher"
android:padding="24dp"/>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="bottom|center_horizontal"
android:orientation="vertical"
android:layout_marginBottom="10dp">
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginLeft="15dp"
android:orientation="horizontal">
<Button
android:id="@+id/true_Button"
android:layout_width="wrap_content"
android:textAllCaps="false"
android:layout_marginRight="10dp"
android:layout_height="wrap_content"
android:text="true" />
<Button
android:id="@+id/false_Button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textAllCaps="false"
android:layout_marginRight="5dp"
android:text="False" />
</LinearLayout>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center">
<ImageButton
android:id="@+id/prev_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:contentDescription="arrowLeft"/>
<ImageButton
android:id="@+id/next_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:drawablePadding="4dp"
android:contentDescription="arrowRight"/>
</LinearLayout>
    </LinearLayout>
</FrameLayout>
I am using this line to change the gravity:
android:layout_gravity="bottom|center_horizontal"
like this:
| |
doc_23538088
|
Currently I am running a simple rewrite
RewriteCond %{HTTP_REFERER} !^http://www.example.com/this-page.html$
RewriteRule this-download-page.html - [F,NC]
This works great in everything other than IE11 and below.
Any ideas what I am doing wrong?
| |
doc_23538089
|
Error: Package: R-core-devel-3.0.2-1.el6.x86_64 (epel)
Requires: pcre-devel
Error: Package: R-core-3.0.2-1.el6.x86_64 (epel)
Requires: libtk8.5.so()(64bit)
Error: Package: R-core-devel-3.0.2-1.el6.x86_64 (epel)
Requires: texinfo-tex
Error: Package: R-core-devel-3.0.2-1.el6.x86_64 (epel)
Requires: tk-devel
Error: Package: R-core-3.0.2-1.el6.x86_64 (epel)
Requires: libjpeg.so.62(LIBJPEG_6.2)(64bit)
Error: Package: R-core-devel-3.0.2-1.el6.x86_64 (epel)
Requires: tex(latex)
Error: Package: R-core-3.0.2-1.el6.x86_64 (epel)
Requires: tex(dvips)
Error: Package: R-core-3.0.2-1.el6.x86_64 (epel)
Requires: tex(latex)
Error: Package: R-core-devel-3.0.2-1.el6.x86_64 (epel)
Requires: tcl-devel
When I try to install a package mentioned in the error, it gives error showing another package from same list, and ultimately forms a cycle.
The whole log is:
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
Updating certificate-based repositories.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package R.x86_64 0:3.0.2-1.el6 will be installed
--> Processing Dependency: libRmath-devel = 3.0.2-1.el6 for package: R-3.0.2-1.el6.x86_64
--> Processing Dependency: R-java = 3.0.2-1.el6 for package: R-3.0.2-1.el6.x86_64
---> Package R-core.x86_64 0:3.0.2-1.el6 will be installed
--> Processing Dependency: tex(latex) for package: R-core-3.0.2-1.el6.x86_64
--> Processing Dependency: tex(dvips) for package: R-core-3.0.2-1.el6.x86_64
--> Processing Dependency: libjpeg.so.62(LIBJPEG_6.2)(64bit) for package: R-core-3.0.2-1.el6.x86_64
--> Processing Dependency: libtk8.5.so()(64bit) for package: R-core-3.0.2-1.el6.x86_64
---> Package R-core-devel.x86_64 0:3.0.2-1.el6 will be installed
--> Processing Dependency: tk-devel for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: texinfo-tex for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: tex(latex) for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: tcl-devel for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: pcre-devel for package: R-core-devel-3.0.2-1.el6.x86_64
---> Package R-devel.x86_64 0:3.0.2-1.el6 will be installed
--> Processing Dependency: R-java-devel = 3.0.2-1.el6 for package: R-devel-3.0.2-1.el6.x86_64
--> Running transaction check
---> Package R-core.x86_64 0:3.0.2-1.el6 will be installed
--> Processing Dependency: tex(latex) for package: R-core-3.0.2-1.el6.x86_64
--> Processing Dependency: tex(dvips) for package: R-core-3.0.2-1.el6.x86_64
--> Processing Dependency: libjpeg.so.62(LIBJPEG_6.2)(64bit) for package: R-core-3.0.2-1.el6.x86_64
--> Processing Dependency: libtk8.5.so()(64bit) for package: R-core-3.0.2-1.el6.x86_64
---> Package R-core-devel.x86_64 0:3.0.2-1.el6 will be installed
--> Processing Dependency: tk-devel for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: texinfo-tex for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: tex(latex) for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: tcl-devel for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: pcre-devel for package: R-core-devel-3.0.2-1.el6.x86_64
---> Package R-java.x86_64 0:3.0.2-1.el6 will be installed
---> Package R-java-devel.x86_64 0:3.0.2-1.el6 will be installed
---> Package libRmath-devel.x86_64 0:3.0.2-1.el6 will be installed
--> Processing Dependency: libRmath = 3.0.2-1.el6 for package: libRmath-devel-3.0.2-1.el6.x86_64
--> Running transaction check
---> Package R-core.x86_64 0:3.0.2-1.el6 will be installed
--> Processing Dependency: tex(latex) for package: R-core-3.0.2-1.el6.x86_64
--> Processing Dependency: tex(dvips) for package: R-core-3.0.2-1.el6.x86_64
--> Processing Dependency: libjpeg.so.62(LIBJPEG_6.2)(64bit) for package: R-core-3.0.2-1.el6.x86_64
--> Processing Dependency: libtk8.5.so()(64bit) for package: R-core-3.0.2-1.el6.x86_64
---> Package R-core-devel.x86_64 0:3.0.2-1.el6 will be installed
--> Processing Dependency: tk-devel for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: texinfo-tex for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: tex(latex) for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: tcl-devel for package: R-core-devel-3.0.2-1.el6.x86_64
--> Processing Dependency: pcre-devel for package: R-core-devel-3.0.2-1.el6.x86_64
---> Package libRmath.x86_64 0:3.0.2-1.el6 will be installed
--> Finished Dependency Resolution
Error: Package: R-core-devel-3.0.2-1.el6.x86_64 (epel)
Requires: pcre-devel
Error: Package: R-core-3.0.2-1.el6.x86_64 (epel)
Requires: libtk8.5.so()(64bit)
Error: Package: R-core-devel-3.0.2-1.el6.x86_64 (epel)
Requires: texinfo-tex
Error: Package: R-core-devel-3.0.2-1.el6.x86_64 (epel)
Requires: tk-devel
Error: Package: R-core-3.0.2-1.el6.x86_64 (epel)
Requires: libjpeg.so.62(LIBJPEG_6.2)(64bit)
Error: Package: R-core-devel-3.0.2-1.el6.x86_64 (epel)
Requires: tex(latex)
Error: Package: R-core-3.0.2-1.el6.x86_64 (epel)
Requires: tex(dvips)
Error: Package: R-core-3.0.2-1.el6.x86_64 (epel)
Requires: tex(latex)
Error: Package: R-core-devel-3.0.2-1.el6.x86_64 (epel)
Requires: tcl-devel
You could try using --skip-broken to work around the problem
** Found 9 pre-existing rpmdb problem(s), 'yum check' output follows:
dvd+rw-tools-7.1-5.el6.x86_64 is a duplicate with dvd+rw-tools-7.1-5blocks.el6.x86_64
gvfs-1.4.3-15.el6.x86_64 is a duplicate with gvfs-1.4.3-12.el6.x86_64
gvfs-afc-1.4.3-15.el6.x86_64 is a duplicate with gvfs-afc-1.4.3-12.el6.x86_64
gvfs-archive-1.4.3-15.el6.x86_64 is a duplicate with gvfs-archive-1.4.3-12.el6.x86_64
gvfs-devel-1.4.3-15.el6.x86_64 is a duplicate with gvfs-devel-1.4.3-12.el6.x86_64
gvfs-fuse-1.4.3-15.el6.x86_64 is a duplicate with gvfs-fuse-1.4.3-12.el6.x86_64
gvfs-gphoto2-1.4.3-15.el6.x86_64 is a duplicate with gvfs-gphoto2-1.4.3-12.el6.x86_64
gvfs-obexftp-1.4.3-15.el6.x86_64 is a duplicate with gvfs-obexftp-1.4.3-12.el6.x86_64
gvfs-smb-1.4.3-15.el6.x86_64 is a duplicate with gvfs-smb-1.4.3-12.el6.x86_64
| |
doc_23538090
|
Reference image created using paint.net software. I drew a line to split the text and filled the bottom part with a different texture.
*I don't want the line to be visible in the final output.
A: Possible.
*
*Fill the path with the solid brush.
*Get the rectangle that bounds the path through the GraphicsPath.GetBounds method.
*Call the Graphics.SetClip method to exclude the top half of the rectangle.
*Fill the path with a TextureBrush or HatchBrush.
An example that uses a HatchBrush to fill the bottom half of the path:
private void SomeControl_Paint(object sender, PaintEventArgs e)
{
var g = e.Graphics;
var r = (sender as Control).ClientRectangle;
using (var gp = new GraphicsPath())
using (var sf = new StringFormat())
using (var fnt = new Font("Blackoak Std", 72))
using (var hbr = new HatchBrush(HatchStyle.Percent25, Color.White, Color.Red))
{
sf.Alignment = sf.LineAlignment = StringAlignment.Center;
gp.AddString("RED", fnt.FontFamily, (int)fnt.Style, GetEmFontSize(fnt), r, sf);
g.SmoothingMode = SmoothingMode.AntiAlias;
g.FillPath(Brushes.Red, gp);
var rf = gp.GetBounds();
rf.Height /= 2f;
g.SetClip(rf, CombineMode.Exclude);
g.FillPath(hbr, gp);
g.ResetClip();
g.SmoothingMode = SmoothingMode.None;
}
}
private float GetEmFontSize(Font fnt) =>
fnt.SizeInPoints * (fnt.FontFamily.GetCellAscent(fnt.Style) +
fnt.FontFamily.GetCellDescent(fnt.Style)) / fnt.FontFamily.GetEmHeight(fnt.Style);
See also the other HatchStyle values.
| |
doc_23538091
|
All my functions are working except the delAt function. Whenever I execute this function, my program hangs. Another problem is that if I execute the addAt function more than once, that is, if I choose "option 1" more than one time, the program hangs, although it doesn't hang when I select "option 1" only one time.
Following is the code:
#include<stdio.h>
#include<conio.h>
#include<stdlib.h>
#include<string.h>
#include<dos.h>
struct stud
{
int rn, id, ph[15];
char add[30], na[20], d[15], in[5];
struct stud *next;
} *h = NULL, *p, *q, *t, *ts;
void add()
{
p =(struct stud*)malloc(sizeof(struct stud*));
printf("\nEnter the Initials of Student : ");
scanf("%s", &p->in);
printf("\nEnter the Last Name of Student : ");
scanf("%s", &p->na);
printf("\nEnter the ID of Student : ");
scanf("%d", &p->id);
printf("\nEnter the Roll No. of Student : ");
scanf("%d", &p->rn);
printf("\nEnter the Ph No. of Student : ");
scanf("%d", &p->ph);
printf("\nEnter the Address of Student : ");
scanf("%s", &p->add);
printf("\nEnter the D.O.B. of Student(dd/mm/yyyy) : ");
scanf("%s", &p->d);
p->next = NULL;
if (h == NULL)
{
h=p;
}
else
{
q = h;
while (q->next != NULL)
{
q = q->next;
}
q->next = p;
}
ts++;
}
void delAt(int r)
{
q=h;
r=x;
if (q == NULL)
{
printf("list is empty");
}
while (q->rn != r - 1)
{
q = q->next;
}
p = q->next;
q->next = p->next;
free(p);
printf("\n\nRecord Deleted.");
}
void main()
{
int ch = 0, r;
char ni[5];
while(ch != 8)
{
printf("1.Add the Record.\n\n2.Add Record at Locn.\n\n3.Delete Record.");
printf("\n\n4.Modify Record.\n\n5.Search Record.\n\n6.Sort Records.");
printf("\n\n7.Display\n\n8.Exit");
printf("\n\nEnter the Choice: ");
scanf("%d",&ch);
switch(ch)
{
case 1:
add();
break;
case 3:
printf("\nEnter the Roll No. : ");
scanf("%d",&r);
delAt(r);
break;
case 7:
disp();
break;
}
ch++;
}
}
A: The posted code does not compile cleanly. Please correct it and repost.
Here is a list of the messages output by the C compiler:
gcc -ggdb -Wall -Wextra -Wconversion -pedantic -std=gnu11 -c "untitled.c" (in directory: /home/richard/Documents/forum)
untitled.c: In function ‘add’:
untitled.c:18:13: warning: format ‘%s’ expects argument of type ‘char *’, but argument 2 has type ‘char (*)[5]’ [-Wformat=]
scanf("%s", &p->in);
~^ ~~~~~~
untitled.c:20:13: warning: format ‘%s’ expects argument of type ‘char *’, but argument 2 has type ‘char (*)[20]’ [-Wformat=]
scanf("%s", &p->na);
~^ ~~~~~~
untitled.c:26:13: warning: format ‘%d’ expects argument of type ‘int *’, but argument 2 has type ‘int (*)[15]’ [-Wformat=]
scanf("%d", &p->ph);
~^ ~~~~~~
untitled.c:28:13: warning: format ‘%s’ expects argument of type ‘char *’, but argument 2 has type ‘char (*)[30]’ [-Wformat=]
scanf("%s", &p->add);
~^ ~~~~~~~
untitled.c:30:13: warning: format ‘%s’ expects argument of type ‘char *’, but argument 2 has type ‘char (*)[15]’ [-Wformat=]
scanf("%s", &p->d);
~^ ~~~~~
untitled.c: In function ‘delAt’:
untitled.c:53:7: error: ‘x’ undeclared (first use in this function)
r=x;
^
untitled.c:53:7: note: each undeclared identifier is reported only once for each function it appears in
untitled.c: At top level:
untitled.c:70:6: warning: return type of ‘main’ is not ‘int’ [-Wmain]
void main()
^~~~
untitled.c: In function ‘main’:
untitled.c:94:17: warning: implicit declaration of function ‘disp’; did you mean ‘div’? [-Wimplicit-function-declaration]
disp();
^~~~
div
untitled.c:73:10: warning: unused variable ‘ni’ [-Wunused-variable]
char ni[5];
^~
Compilation failed.
| |
doc_23538092
|
This is how I'm calling ktor to serve web-pages:
suspend fun main() = coroutineScope<Unit> {
System.setProperty(SimpleLogger.DEFAULT_LOG_LEVEL_KEY, "TRACE")
embeddedServer(
Netty,
port = 80,
module = Application::module
).apply { start(wait = true) }
}
data class Res(val topbar: String)
fun Application.module() {
install(KoreanderFeature)
routing {
get("/") {
val text = javaClass.getResource("/templates/topbar.kor").readText()
val resource = Res(Koreander().render(text, Any()))
call.respondKorRes("/templates/index.kor", resource)
}
}
}
This is my topbar.kor file (just for testing, so I kept it small):
.navbar-custom
%ul.list-unstyled.topnav-menu.float-right.mb-0
%li.d-none.d-sm-block
%form.app-search
This is my index.kor file:
%html
%head
%body
%p.hello Hello World!
$topbar / <- problem here, this has escaped html, browser interpret it as text
This is what rendered on the browser:
How do I unescape the HTML here so that the browser interprets it as valid HTML and renders it?
A: I got an answer from David Eriksson on the official kotlinlang Slack.
We can use the :unsafehtml filter to bypass HTML escaping:
%html
%head
%body
%p.hello Hello World!
:unsafehtml $topbar
| |
doc_23538093
|
Each edge has some weight. I want to find all equal paths that start at each vertex.
In other words, I want to get all tuples (v1, v, v2) where v1 and v2 are an arbitrary ancestor and descendant such that c(v1, v) = c(v, v2).
Let the edges have the following weights (it is just an example):
a-b = 3
b-c = 1
c-d = 1
d-e = 1
Then:
*
*The vertex A does not have any equal path (there is no vertex from left side).
*The vertex B has one equal pair. The path B-A equals to the path B-E (3 == 3).
*The vertex C has one equal pair. The path B-C equals to the path C-D (1 == 1).
*The vertex D has one equal pair. The path C-D equals to the path D-E (1 == 1).
*The vertex E does not have any equal path (there is no vertex from right side).
I implemented a simple algorithm, which works in O(n^2), but it is too slow for me.
A: You write, in comments, that your current approach is
It seems, I looking for a way to decrease constant in O(n^2). I choose
some vertex. Then I create two set. Then I fill these sets with
partial sums, while iterating from this vertex to start of tree and to
finish of tree. Then I find set intersection and get number of paths
from this vertex. Then I repeat algorithm for all other vertices.
There is a simpler and, I think, faster O(n^2) approach, based on the so called two pointers method.
For each vertex v, go in both possible directions at the same time. Have one "pointer" to a vertex (vl) moving in one direction and another (vr) in the other direction, and try to keep the distance from v to vl as close to the distance from v to vr as possible. Each time these distances become equal, you have equal paths.
for v in vertices
vl = prev(v)
vr = next(v)
while (vl is still inside the tree)
and (vr is still inside the tree)
if dist(v,vl) < dist(v,vr)
vl = prev(vl)
else if dist(v,vr) < dist(v,vl)
vr = next(vr)
else // dist(v,vr) == dist(v,vl)
ans = ans + 1
vl = prev(vl)
vr = next(vr)
(By precalculating the prefix sums, you can find dist in O(1).)
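A small Python sketch of that precalculation (variable names are my own), using the weights from the example above:
weights = [3, 1, 1, 1]         # a-b, b-c, c-d, d-e
pre = [0]
for w in weights:
    pre.append(pre[-1] + w)    # pre == [0, 3, 4, 5, 6]

def dist(i, j):                # vertices indexed 0..4 along the chain
    return abs(pre[j] - pre[i])

print(dist(1, 0), dist(1, 4))  # 3 3 -> the equal pair B-A / B-E from the example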
It's easy to see that no equal pair will be missed provided that you do not have zero-length edges.
Regarding a faster solution, if you want to list all pairs, then you can't do it faster, because the number of pairs will be O(n^2) in the worst case. But if you need only the amount of these pairs, there might exist faster algorithms.
UPD: I came up with another algorithm for calculating the amount, which might be faster in case your edges are rather short. If you denote the total length of your chain (sum of all edges weight) as L, then the algorithm runs in O(L log L). However, it is much more advanced conceptually and more advanced in coding too.
Firstly some theoretical reasoning. Consider some vertex v. Let us have two arrays, a and b, not the C-style zero-indexed arrays, but arrays with indexation from -L to L.
Let us define
*
*for i>0, a[i]=1 iff to the right of v on the distance exactly i there
is a vertex, otherwise a[i]=0
*for i=0, a[i]≡a[0]=1
*for i<0, a[i]=1 iff to the left of v on the distance exactly -i there is a vertex, otherwise a[i]=0
A simple understanding of this array is as follows. Stretch your graph and lay it along the coordinate axis so that each edge has the length equal to its weight, and that vertex v lies in the origin. Then a[i]=1 iff there is a vertex at coordinate i.
For your example and for vertex "b" chosen as v:
a--------b--c--d--e
--|--|--|--|--|--|--|--|--|-->
-4 -3 -2 -1 0 1 2 3 4
a: ... 0 1 0 0 1 1 1 1 0 ...
For another array, array b, we define the values in a symmetrical way with respect to origin, as if we have inverted the direction of the axis:
*
*for i>0, b[i]=1 iff to the left of v on the distance exactly i there
is a vertex, otherwise b[i]=0
*for i=0, b[i]≡b[0]=1
*for i<0, b[i]=1 iff to the right of v on the distance exactly -i there is a vertex, otherwise b[i]=0
Now consider a third array c such that c[i]=a[i]*b[i], where the asterisk stands for ordinary multiplication. Obviously c[i]=1 iff the path of length abs(i) to the left ends in a vertex, and the path of length abs(i) to the right ends in a vertex. So for i>0 each position in c that has c[i]=1 corresponds to the path you need. There are also negative positions (c[i]=1 with i<0), which just reflect the positive positions, and one more position where c[i]=1, namely position i=0.
Calculate the sum of all elements in c. This sum will be sum(c)=2P+1, where P is the total number of paths which you need with v being its center. So if you know sum(c), you can easily determine P.
Let us now consider more closely the arrays a and b and how they change when we change the vertex v. Let us denote by v0 the leftmost vertex (the root of your tree) and by a0 and b0 the corresponding a and b arrays for that vertex.
For arbitrary vertex v denote d=dist(v0,v). Then it is easy to see that for vertex v the arrays a and b are just arrays a0 and b0 shifted by d:
a[i]=a0[i+d]
b[i]=b0[i-d]
It is obvious if you remember the picture with the tree stretched along a coordinate axis.
Now let us consider one more array, S (one array for all vertices), and for each vertex v let us put the value of sum(c) into the S[d] element (d and c depend on v).
More precisely, let us define array S so that for each d
S[d] = sum_over_i(a0[i+d]*b0[i-d])
Once we know the S array, we can iterate over vertices and for each vertex v obtain its sum(c) simply as S[d] with d=dist(v,v0), because for each vertex v we have sum(c)=sum(a0[i+d]*b0[i-d]).
But the formula for S is very simple: S is just the convolution of the a0 and b0 sequences. (The formula does not exactly follow the definition, but is easy to modify to the exact definition form.)
So what we now need is given a0 and b0 (which we can calculate in O(L) time and space), calculate the S array. After this, we can iterate over S array and simply extract the numbers of paths from S[d]=2P+1.
Direct application of the formula above is O(L^2). However, the convolution of two sequences can be calculated in O(L log L) by applying the Fast Fourier transform algorithm. Moreover, you can apply a similar Number theoretic transform (don't know whether there is a better link) to work with integers only and avoid precision problems.
So the general outline of the algorithm becomes
calculate a0 and b0 // O(L)
calculate S = corrected_convolution(a0, b0) // O(L log L)
v0 = leftmost vertex (root)
for v in vertices:
d = dist(v0, v)
ans = ans + (S[d]-1)/2
(I call it corrected_convolution because S is not exactly a convolution, but a very similar object for which a similar algorithm can be applied. Moreover, you can even define S'[2*d]=S[d]=sum(a0[i+d]*b0[i-d])=sum(a0[i]*b0[i-2*d]), and then S' is the convolution proper.)
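As a quick sanity check of the construction (plain Python with my own variable names, using the direct quadratic formula rather than FFT), the example chain from the question gives exactly the pair counts listed there:
weights = [3, 1, 1, 1]                 # a-b, b-c, c-d, d-e
L = sum(weights)                       # total chain length
coords = [0]
for w in weights:
    coords.append(coords[-1] + w)      # vertex coordinates from the root: 0, 3, 4, 5, 6

a0 = {c: 1 for c in coords}            # vertices to the right of the root (and the root itself)
b0 = {-c: 1 for c in coords}           # mirror image for the opposite direction

def S(d):                              # S[d] = sum_i a0[i+d] * b0[i-d]
    return sum(a0.get(i + d, 0) * b0.get(i - d, 0) for i in range(-L, L + 1))

for v, d in zip("abcde", coords):
    print(v, (S(d) - 1) // 2)          # prints: a 0, b 1, c 1, d 1, e 0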
| |
doc_23538094
|
I do not know the differences, but the following two functions give me the same results
all_shortest_paths(g, 1,3)
get.all.shortest.paths(g, 1,3)
Here is the outcome
$res
$res[[1]]
+ 3/9 vertices, from a86e634:
[1] 1 4 3
$res[[2]]
+ 3/9 vertices, from a86e634:
[1] 1 2 3
$nrgeo
[1] 1 1 2 1 1 0 1 1 1
Now, I want to get the nodes that are visited in a path without the source and sink nodes. For instance, I get the first shortest path.
> all_shortest_paths(g, 1,3)$res[1]
[[1]]
+ 3/9 vertices, from a86e634:
[1] 1 4 3
How can I store the nodes that are visited, excluding the source and sink nodes (i.e., 1 and 3)? When I assign a <- all_shortest_paths(g, 1,3)$res[1], its type seems to be a list, but no matter what I do, I cannot access 4. It keeps returning
+ 3/9 vertices, from a86e634:
[1] 1 4 3
A: You need to go one more level down the list returned by all_shortest_paths. In the code below I create the variable n to make it more readable.
library(igraph)
g <- make_graph("Cubical")
p <- all_shortest_paths(g, 1, 3)
n <- length(p[[1]][[1]])
p[[1]][[1]][-c(1, n)]
#+ 1/8 vertex, from 0de75ff:
#[1] 4
To get all inner vertices in one go, use lapply on p[[1]].
lapply(p[[1]], function(.p){
n <- length(.p)
.p[-c(1, n)]
})
#[[1]]
#+ 1/8 vertex, from 0de75ff:
#[1] 4
#
#[[2]]
#+ 1/8 vertex, from 0de75ff:
#[1] 2
This code does not depend on the number of inner vertices, as can be seen if the source and sink are 1 and 7.
This time a one-liner.
(Output omitted.)
p2 <- all_shortest_paths(g, 1, 7)
lapply(p2[[1]], function(.p) .p[-c(1, length(.p))])
| |
doc_23538095
|
Below is a snippet from my build.sbt that shows how I am overriding the install directory. I have a custom start script in the src/templates directory for my Scala fat-jar app. When I remove the install directory override below, the RPM packages fine and installs OK in /usr/share. Any help with this issue is greatly appreciated.
linuxPackageMappings in Rpm <<= (linuxPackageMappings) map { mappings =>
for(LinuxPackageMapping(filesAndNames, meta, zipped) <- mappings) yield {
val newFilesAndNames = for {
(file, installPath) <- filesAndNames
} yield file -> installPath.replaceFirst("/usr/share", "/opt")
LinuxPackageMapping(newFilesAndNames, meta, zipped)
}
}
A: I was able to solve this by removing the above code and just adding a one-liner to my build.sbt:
defaultLinuxInstallLocation:= "/opt"
| |
doc_23538096
|
function doGet(e)
{
var sheet = SpreadsheetApp.openById('Google Secret key').getSheetByName('Google');
var b1 = sheet.getSheetValues(e.parameters.n1, 23, 1 , e.parameters.n2);
return ContentService.createTextOutput(b1);
}
I want to execute this URL in WordPress and display the result in the content. I would appreciate it if anybody could help!
A: If you want to run your Apps Script code from a 3rd-party application (as in your case, because it's a webpage made by you), you need to use the Apps Script API and call the scripts.run method endpoint.
You can check the Browser Quickstart to learn how to set and call the API with a practical example. Taking it as a base and the instructions on how to execute Functions using Apps Script API I wrote some code to send some parameters to an Apps Script code and return any primitive variable type you want to.
<!DOCTYPE html>
<html>
<head>
<title>Google Apps Script API Quickstart</title>
<meta charset="utf-8" />
</head>
<body>
<p>Google Apps Script API Quickstart</p>
<!--Add buttons to initiate auth sequence and sign out-->
<button id="authorize_button" style="display: none;">Authorize</button>
<button id="signout_button" style="display: none;">Sign Out</button>
<pre id="content" style="white-space: pre-wrap;"></pre>
<script type="text/javascript">
// Client ID and API key from the Developer Console
var CLIENT_ID = 'YOUR_CLIENT_ID';
var API_KEY = 'YOUR_API_KEY';
// Array of API discovery doc URLs for APIs used by the quickstart
var DISCOVERY_DOCS = ["https://script.googleapis.com/$discovery/rest?version=v1"];
// Authorization scopes required by the API; multiple scopes can be
// included, separated by spaces.
var SCOPES = 'https://www.googleapis.com/auth/script.projects';
var authorizeButton = document.getElementById('authorize_button');
var signoutButton = document.getElementById('signout_button');
/**
* On load, called to load the auth2 library and API client library.
*/
function handleClientLoad() {
gapi.load('client:auth2', initClient);
}
/**
* Initializes the API client library and sets up sign-in state
* listeners.
*/
function initClient() {
gapi.client.init({
apiKey: API_KEY,
clientId: CLIENT_ID,
discoveryDocs: DISCOVERY_DOCS,
scope: SCOPES
}).then(function () {
// Listen for sign-in state changes.
gapi.auth2.getAuthInstance().isSignedIn.listen(updateSigninStatus);
// Handle the initial sign-in state.
updateSigninStatus(gapi.auth2.getAuthInstance().isSignedIn.get());
authorizeButton.onclick = handleAuthClick;
signoutButton.onclick = handleSignoutClick;
}, function(error) {
appendPre(JSON.stringify(error, null, 2));
});
}
/**
* Called when the signed in status changes, to update the UI
* appropriately. After a sign-in, the API is called.
*/
function updateSigninStatus(isSignedIn) {
if (isSignedIn) {
authorizeButton.style.display = 'none';
signoutButton.style.display = 'block';
callAppsScript();
} else {
authorizeButton.style.display = 'block';
signoutButton.style.display = 'none';
}
}
/**
* Sign in the user upon button click.
*/
function handleAuthClick(event) {
gapi.auth2.getAuthInstance().signIn();
}
/**
* Sign out the user upon button click.
*/
function handleSignoutClick(event) {
gapi.auth2.getAuthInstance().signOut();
}
/**
* Append a pre element to the body containing the given message
* as its text node. Used to display the results of the API call.
*
* @param {string} message Text to be placed in pre element.
*/
function appendPre(message) {
var pre = document.getElementById('content');
var textContent = document.createTextNode(message + '\n');
pre.appendChild(textContent);
}
/**
* Shows basic usage of the Apps Script API.
*
* Call the Apps Script API to create a new script project, upload files
* to the project, and log the script's URL to the user.
*/
function callAppsScript() {
gapi.client.script.scripts.run({
// Apps Script project id
scriptId: "your-script-id",
resource: {
// Function's name you want to run
function: "runFromExternalSource",
// Paramters you want to pass from Wordpress
parameters: [{"parameter1" : 123, "parameter2": "This is a test"}]
}
}).then((resp) => {
let result = resp.result;
if (result.error) throw result.error;
console.log(` This is the result`);
console.log(result);
}).catch((error) => {
// The API encountered a problem.
return console.log(error);
});
}
</script>
<script async defer src="https://apis.google.com/js/api.js"
onload="this.onload=function(){};handleClientLoad()"
onreadystatechange="if (this.readyState === 'complete') this.onload()">
</script>
</body>
</html>
A basic Apps Script function for testing would look like this:
function runFromExternalSource(event){
Logger.log(event.parameter1);
Logger.log(event.parameter2);
return event;
}
| |
doc_23538097
|
I want to refresh my DataGridView if there are changes in a particular XML file. I got a FileSystemWatcher to look for any changes in the file and call the DataGridView function to reload the XML data.
When I tried it, I'm getting an "Invalid data" exception error. Can somebody please tell me what mistake I am making here?
public Form1()
{
InitializeComponent();
FileSystemWatcher watcher = new FileSystemWatcher();
watcher.Path = @"C:\test";
watcher.Changed += fileSystemWatcher1_Changed;
watcher.EnableRaisingEvents = true;
//watches only Person.xml
watcher.Filter = "Person.xml";
//watches all files with a .xml extension
watcher.Filter = "*.xml";
}
private const string filePath = @"C:\test\Person.xml";
private void LoadDatagrid()
{
try
{
using (XmlReader xmlFile = XmlReader.Create(filePath, new XmlReaderSettings()))
{
DataSet ds = new DataSet();
ds.ReadXml(xmlFile);
dataGridView1.DataSource = ds.Tables[0]; //Here is the problem
}
}
catch (Exception ex)
{
MessageBox.Show(ex.ToString());
}
}
private void Form1_Load(object sender, EventArgs e)
{
LoadDatagrid();
}
private void fileSystemWatcher1_Changed(object sender, FileSystemEventArgs e)
{
LoadDatagrid();
}
A: This is because the FileSystemWatcher runs on a distinct thread, not the UI thread. In WinForms apps, only the UI thread (the main thread of the program) can interact with visual controls. If you need to interact with visual controls from another thread, as in this case, you must call Invoke on the target control.
// this event will be fired from the thread where FileSystemWatcher is running.
private void fileSystemWatcher1_Changed(object sender, FileSystemEventArgs e)
{
// Call Invoke on the current form, so the LoadDataGrid method
// will be executed on the main UI thread.
this.Invoke(new Action(()=> LoadDatagrid()));
}
A: The FileSystemWatcher is running in a separate thread, not in the UI thread. To maintain thread safety, .NET prevents you from updating the UI from any thread other than the UI thread (i.e. the one that created the Form's components).
To resolve the issue easily, call Invoke on the target Form with a MethodInvoker delegate from your fileSystemWatcher1_Changed event. See the MethodInvoker delegate documentation for more details on how to do this. There are other options, including setting up a synchronized (i.e. thread-safe) object for holding the results/flag of any event, but this requires no changes to the Form code (in the case of games, for instance, one could just poll the synchronized object in the main game loop).
private void fileSystemWatcher1_Changed(object sender, FileSystemEventArgs e)
{
// Invoke an anonymous method on the thread of the form.
this.Invoke((MethodInvoker) delegate
{
this.LoadDataGrid();
});
}
Edit: Corrected the previous answer, which had a problem within the delegate: the LoadDataGrid call was missing this. and would not resolve as such.
| |
doc_23538098
|
// The controller
angular.module('myApp').controller('ManageCtrl', function($scope, Restangular) {
$scope.delete = function(e) {
Restangular.one('product', e).remove();
};
Restangular.all('products').getList({}).then(function(data) {
$scope.products = data.products;
$scope.noOfPages = data.pages;
});
});
// The view
<li ng-repeat="product in products">
<a href="#" ng-click="delete(sheet._id)"></a>
</li>
I would also love to find an example of this - even with Angular resource. All the admin/data table demos seem to work from static data.
A: In my case the above didn't quite work. I had to do the following:
$scope.changes = Restangular.all('changes').getList().$object;
$scope.destroy = function(change) {
Restangular.one("changes", change._id).remove().then(function() {
var index = $scope.changes.indexOf(change);
if (index > -1) $scope.changes.splice(index, 1);
});
};
A: According to Restangular https://github.com/mgonto/restangular#restangular-methods they mention that you should use the original item and run an action with it, so in your html code you should:
<li ng-repeat="product in products">
<a href="#" ng-click="delete(product)"></a>
</li>
Then in your controller:
$scope.delete = function( product) {
product.remove().then(function() {
// edited: a better solution, suggested by Restangular themselves
// since previously _.without() could leave you with an empty non-restangular array
// see https://github.com/mgonto/restangular#removing-an-element-from-a-collection-keeping-the-collection-restangularized
var index = $scope.products.indexOf(product);
if (index > -1) $scope.products.splice(index, 1);
});
};
Notice they use underscore.js's _.without, which will remove the element from the array. I guess that if they post that example on their readme page, it means the .remove() function doesn't remove the original item from the collection. This makes sense, since you don't always want a removed item to also be removed from the collection itself.
Also, what happens if the DELETE $HTTP request fails? You don't want to remove the item then, and you have to make sure to handle that problem in your code.
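A hedged sketch of one way to handle that failure (the error-callback body is just an illustration): pass a second function to .then() so the item stays in the list when the DELETE fails:
$scope.delete = function(product) {
    product.remove().then(function() {
        var index = $scope.products.indexOf(product);
        if (index > -1) $scope.products.splice(index, 1);
    }, function(error) {
        // The DELETE failed, so the item is kept in $scope.products.
        console.error('Could not delete product', error);
    });
};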
| |
doc_23538099
|
so what I guess I am trying to achieve is something like this:
Git archive -o export iterateOverAllCommits EXPORTS_TO (first commit)archive0001.zip, (second commit)archive0002.zip…
After that it's no trouble to expand/prepare files for video.
A: By combining git archive and git rev-list with a little bash, you can do so.
COUNT=0
for commit in `git rev-list --reverse HEAD`; do
git archive $commit --format=zip -o archive$COUNT.zip
COUNT=$((COUNT + 1))
done
git rev-list --reverse HEAD prints out commit hashes starting with the first commit and ending with HEAD.
git archive $commit --format=zip -o archive$COUNT.zip creates a zip archive of the commit specified by the commit hash from rev-list.
Both rev-list and archive have a lot of options, which could help you further refine the archives to contain the information you need only.
Using printf you could easily modify the above to zero-pad the count.
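For example, a minimal variant (four digits is an arbitrary choice) would replace the archive name inside the loop with:
git archive $commit --format=zip -o archive$(printf "%04d" $COUNT).zip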
|