/dpt/ - Daily Programming Thread

old thread: What are you working on, Cred Forums?

Other urls found in this thread:

github.com/VSCodeVim/Vim
pyos.github.io/dg/
en.wikipedia.org/wiki/Virtual_memory
gcc.gnu.org/onlinedocs/gcc-6.2.0/gcc/DEC-Alpha-Options.html
pastebin.com/4pS8UwcX
pastebin.com/evsyYdcW
gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
github.com/codepath/android_guides/wiki/Fragment-Navigation-Drawer
git.savannah.gnu.org/cgit/emacs.git/commit/?id=6133d656d0a62fa9b45d84d7f71862f577bf415c
git.savannah.gnu.org/cgit/emacs.git/commit/?id=97f35401771f782d5edc0dc6b841e054ca8685c3
github.com/KeckleKnight/Keckle-Bot
en.wikipedia.org/wiki/Indent_style
kernel.org/doc/Documentation/CodingStyle
learncpp.com/

...

Kill yourself, you degenerate.

HUR DURR OOP IN OOL!

Poor of those who still haven't learned about Go

4th for Go

Wow, that sure was an intelligent response!

It's always interesting that such an amazing paradigm is so hard to properly defend.

So THIS is why microkernels aren't mainstream.

Rate my compile-time assert!

#define static_assert(expr) typedef char STATIC_ASSERT_FAIL [(expr)?1:-1]
...
test.c: In function ‘wheeeee’:
include/one.h:42:43: error: size of array ‘STATIC_ASSERT_FAIL’ is negative
#define static_assert(expr) typedef char STATIC_ASSERT_FAIL [(expr)?1:-1]
^
test.c:35:2: note: in expansion of macro ‘static_assert’
static_assert((num % 2) == 0); /* must be even */
^

any favorite algorithm visualizations? i figure implementing some of these might make reviewing more fun

make trainees implement boids, given the algorithms

pretty shit considering you cant use more than one in scope

I think I'm finally getting into this VHDL thing.

that program looks historic

NOOOO!!!

Won't using it more than once cause it to complain about multiple definitions of the type STATIC_ASSERT_FAIL?

Why don't you try it?
It works multiple times in a row, even if all your asserts are failing.

C11 added proper static asserts, you know.
You don't need to use the -1 array size hack.
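Something like this, if you can build with -std=c11 (rough sketch; the constant and the messages here are made up for illustration):

#include <assert.h>   /* C11: defines static_assert as a macro for _Static_assert */

#define NUM 42        /* illustrative constant, not from your code */

/* Works at file scope and block scope, and you can repeat it in one scope
 * as many times as you like. */
static_assert((NUM % 2) == 0, "NUM must be even");

int main(void)
{
    static_assert(sizeof(int) >= 2, "int is too small");
    return 0;
}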

Stupid posts get stupid responses.

int64_t f(uint8_t n)
{
    if (n < 2 || n > 92) return -1;
    n -= 2;

    uint64_t a = 1, b = 1, c = 0, d = 1, e = 1, f = 0;

    while (n) {
        uint64_t g, h;
        if (n & 1) {
            g = a * d + b * e;
            h = b * d + c * e;
            f = b * e + c * f;
            d = g;
            e = h;
        }
        g = a * b + b * c;
        a = a * a + b * b;
        c = b * b + c * c;
        b = g;
        n >>= 1;
    }

    return d;
}


Did I do good, Cred Forums?

>can link math.h just fine
>can't link gsl
Wew lads I don't think even crossdressing will help with this one.

yes, your code is excellent

What the fuck is it doing?

You can't link header files.

Calculating d.

> >>=
:^)

calculate d-eez nutz

?

Thanks man. I added some comments to improve readability and to make it easier to maintain. , do you think this is better?

/**
 * Calculate d.
 *
 * @param n n
 * @return d on success, -1 on failure
 */
int64_t f(uint8_t n)
{
    if (n < 2 || n > 92) return -1;
    n -= 2;

    uint64_t a = 1, b = 1, c = 0, d = 1, e = 1, f = 0;

    while (n) {
        uint64_t g, h;
        if (n & 1) {
            g = a * d + b * e;
            h = b * d + c * e;
            f = b * e + c * f;
            d = g;
            e = h;
        }
        g = a * b + b * c;
        a = a * a + b * b;
        c = b * b + c * c;
        b = g;
        n >>= 1;
    }

    /* return d */
    return d;
}


Oh, that's just a little trick I picked up from a friend who uses Haskell.

Are parallel arrays deprecated in Haskell?

can somebody tell me why it says there are functions in the type?
it's supposed to be doubles and ints
{-# LANGUAGE FlexibleContexts #-}

import Data.List

whatever mrkd s trg@(x',y') (x,y)
  = if onXnY
      then if near_trg
             then sum + trg_len
             else if onEdge
                    then (-s)
                    else let new = [ (x-1, y), (x+1, y)
                                   , (x, y-1), (x, y+1)
                                   ] \\ mrkd
                         in sum $ map (whatever (new ++ mrkd) (s+1) trg) new
      else if onX (x,y)
             then let fx = fromIntegral $ floor x
                      cx = fromIntegral $ ceiling x
                  in whatever ((fx,y) : mrkd) (s + x-fx) trg (fx, y)
                   + whatever ((cx,y) : mrkd) (s + cx-x) trg (cx, y)
             else let fy = fromIntegral $ floor y
                      cy = fromIntegral $ ceiling y
                  in whatever ((x,fy) : mrkd) (s + y-fy) trg (x, fy)
                   + whatever ((x,cy) : mrkd) (s + cy-y) trg (x, cy)
  where onXnY = fromIntegral (floor x) == x
             && fromIntegral (floor y) == y
        near_trg = x' > (x - 1) && x' < (x + 1)
                || y' > (y - 1) && y' < (y + 1)
        trg_len = if onX trg
                    then abs (x - x')
                    else abs (y - y')
        onX (a,_) = fromIntegral (floor a) == a
        onEdge = x == 0 || y == 0 || x == 10 || y == 10


ghci
whatever ::
(Foldable t, RealFrac (t a -> a), Num a) =>
[(t a -> a, t a -> a)]
-> (t a -> a)
-> (t a -> a, t a -> a)
-> (t a -> a, t a -> a)
-> t a
-> a
-- Defined at city.hs:5:1

Annotate it with the type you want and you'll quickly find out.

github.com/VSCodeVim/Vim

You would think that after 1,166 commits, gg=G would work.

I think you might want to break this up a bit if possible

so this is the """self-documenting""" "efficient" ""high-performance"" """"minimal"""" hasklel code everyone's talking about

THIS, lel

Turns out FP is only good for writing quicksorts, who could have guessed.

Also, as said, break it up. There's a tonne of code duplication in there.

hey help me, is this calculus or some shit?

> Order the following functions by growth rate: N, √N, N^1.5, N^2, N log N, N log log N, N log^2 N, N log(N^2), 2/N, 2^N, 2^(N/2), 37, N^2 log N, N^3. Indicate which functions grow at the same rate.

No, it's not calculus.

Hope that helps.

Anyone here a make expert? I'm having some path issues during a linking process where my ide can find the header and source files but ld can't.

>doesn't know the asymptotic growth of those functions by heart
Rofl fucking brainlet.

This would be terrible code in any language.

It's easy as pie. You just need to write those functions from smallest to largest.

Mspaint circuitry!

in haskell, is there a better way to see if a double is of the form n.0 where n can be any integer

all I've got is fromIntegral (floor double) == double

Print it and check the string.

module anal_beads(a, b, y);

input a;
input b;
output reg y;

always @(a or b) begin
    y = !a | !b;
end

endmodule

Can someone explain memory address assignment to me? I'm really confused about people distributing memory offsets and them working, why/how is a variable stored at the same offset on my machine as it is on someone else's machine? What determines the address of a variable either malloc'd or assigned automatically and why is this static?

In particular I've seen a lot of arbitrary code exploits in old video games or things like gameshark codes and I'm curious as to how that's possible. I understand the process of "read from this address, write to this address" but I don't understand why you don't have to find that address every time, it's just a consistent static offset. What governs this, the hardware, the OS, a language spec, binary formats?

pyos.github.io/dg/

>why/how is a variable stored at the same offset on my machine as it is on someone else's machine?
It's not. en.wikipedia.org/wiki/Virtual_memory

take the derivative

nerd

Because offsets are just that: offsets. They're the difference between a target memory address and a base address. Given a certain base address, the stack will be at a fixed offset from that address (unless the system employs defensive mitigations such as ASLR). So the MMU assigns a distinct address space to each process, as a way to mimic absolute memory addresses while really using relative addresses behind the curtains.

[AVAIL_MEM: 0x0, 0x1 ... 0xn {PROC_MEM: 0x0, 0x1, ... 0xn}]

PRINT *0x0 == PROC_MEM *0x0
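If you want to see those virtual addresses for yourself, here's a rough C sketch (just an illustration, nothing specific to the game-hacking case):

#include <stdio.h>
#include <stdlib.h>

int global_var;    /* lives in the binary's data segment at a fixed offset */

int main(void)
{
    int local_var;                                /* stack */
    int *heap_var = malloc(sizeof *heap_var);     /* heap */

    /* These are virtual addresses handed out by the OS/MMU, not physical RAM
     * addresses. Without ASLR they tend to be the same on every run of the
     * same binary; with ASLR the base addresses move around. */
    printf("code:   %p\n", (void *)main);
    printf("global: %p\n", (void *)&global_var);
    printf("stack:  %p\n", (void *)&local_var);
    printf("heap:   %p\n", (void *)heap_var);

    free(heap_var);
    return 0;
}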

Interesting, thanks. I'm going to read up more about this and experiment.

you don't
derivi-what?

>you dont
Lmao fucking idiot.

you are

Piss off retard pajeet rofl

you should

I hate using global variables,
How do I terminate this otherwise?
import subprocess
import time

silent = subprocess.STARTUPINFO()
silent.dwFlags |= subprocess.STARTF_USESHOWWINDOW

def run(cmd):
    COM = subprocess.Popen(cmd,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           stdin=subprocess.PIPE,
                           startupinfo=silent)
    return COM

print(run('ffmpeg -f gdigrab -framerate ntsc -i desktop lewd.webm').communicate())
time.sleep(5)
#COM.terminate()

A bunch of those numbers are supposed to be superscripts, aren't they?

I guess it's related to calculus, but normally you just kind of memorize them.

break;

That doesn't kill the subprocess,
?

its a joke

Watch out, the internet is serious business. user.

>What are you working on, Cred Forums?
Not my fucking homework apparently, because now I have this thing.

for python should I make programs like

x = 10
y = 20
r = x + y
print r


or

r = x + y

and import my program/use it through shell?

Have CP on your RAM?
No worries, just run this function, pham.

It will ABSOLUTELY fuck your shit up.

This is the worst function I've ever written, I don't know what I was thngken.

task: make this tail recursive

impossible
it's like fibonacci
it makes at most 4 calls to itself
that's 4^n, not to mention the data it must store until all those subcalls die

Colourscheme?

I am having a hard time trying to learn C.

Should I read The C programming language? I could only find the second edition

pls respond

would .gif work better here?

precalculus growth curves of functions. you should have those memorized, but it's called big O notation.

c primer plus

Struggling with anything in particular?

That's the one I am reading actually, I am on the structures chapter.

It's not that I am having a hard time, I just lack the practice.

To rephrase my question: once I finish this book, what other book should I read to solidify / strengthen my C knowledge?

The thing I struggled with the most (in the book I am reading) was a chapter dedicated to processing letters and whatnot: formatting input, output, text. All of that was kind of complicated.

Name a boostrapped dependently typed language

>you can't

When a program is compiled the various components are expected to be at a set location in things like .text or .data. This is known at compile time and for speed is hard coded into the resulting binary. When a modern OS executes a program it uses a virtual memory manager to assign a fake memory range to that process, usually the entire available memory range (4GB on 32 bit, etc). The program doesn't actually use the physical RAM at the locations it thinks it is using, but rather the OS translates the memory address requests to map them from fake, process-specific memory range to the actual physical RAM (the CPU/MMU helps with this).

Offsets are distances to certain parts of code and not memory address targets for something like a subroutine/function or static data. In a compiled binary you can calculate the offset because you know the exact file size and therefore the location of all the bits. This is what position independent executable code does with relative addressing. Modern OSes support address space layout randomization so programs can be loaded to any arbitrary location and will still function normally. Virtual memory management is pretty much the same though.

Because the program gets its own fake address space from the OS you can calculate offsets regardless of how the code is compiled or loaded, but often only across the same section of code. Some ASLR implementations load different parts of the program to different places in the fake address space.
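If you want to poke at the PIE/ASLR part yourself, here's a rough experiment (assumes a reasonably recent GCC on Linux; adjust flags for your toolchain):

/* pie_test.c -- compile it two ways and compare:
 *
 *   gcc -no-pie pie_test.c -o fixed    # classic fixed-load-address binary
 *   gcc -fPIE -pie pie_test.c -o pie   # position independent executable
 *
 * Run each a few times: ./fixed should print the same address of main()
 * every run, while ./pie (with ASLR enabled) prints a different one per run.
 */
#include <stdio.h>

int main(void)
{
    printf("main is loaded at %p\n", (void *)main);
    return 0;
}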

isn't ATS bootstrapped?

Thanks for the breakdown.

>various components are expected to be at a set location in things like .text or .data.
I've seen these before but never knew what they were or what they were related to. I'm assuming these are the components of a standard executable binary file like PE or ELF, is that so?

>This is what position independent executable code does with relative addressing.
I've heard of PIE before but never knew what it was for; I always assumed it had something to do with endianness. Binaries have to be compiled as PIE, they can't just be executed as such, is that right?

All these things are lower than the level I understand so I'm not very familiar with them, I'd like to be eventually because it seems important to know. I feel weird having written a lot of programs without understanding basic file formats or how operating systems handle essential hardware.

Just look up some programming challenge websites, and do the problems in C.

>imperative dependently typed

I don't get it, what's the point of this?

post you're favourite g++ compiler flags

Total order on waifus

>Would you like to get sizeof()d
But sizeof is an operator?

Holy fuck that looks wonderful.

I hate HDL programming with a fucking passion. I wouldn't mind drag-and-drop legos.

but unary, so the term is correct

Yes, PE-COFF and ELF are the two big ones.

Endianness has nothing to do with addressing; it's just what order the bytes are stored in. Correct, a binary has to be compiled for PIE, otherwise it will expect static memory addresses for the various components and the components might be loaded to a different fake address space.
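e.g. a quick way to see your own machine's byte order (rough sketch):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t x = 0x11223344;
    unsigned char *p = (unsigned char *)&x;   /* look at the bytes in memory order */

    /* little endian (x86, most ARM setups): 44 33 22 11
     * big endian:                           11 22 33 44 */
    printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
    return 0;
}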

-Wall -Wextra -Wpedantic -Werror -std=c++14

*different address in the fake memory space

g++ -mbuild-constants -Wall

Is this real js?

I have this piece of 16bit x86 bootloader code:
void disk_init(volatile uint8_t driveno)
{
    check_int13h_ex(driveno);

    asm goto(
        "clc\n"
        "mov ah, 0x48\n"
        "mov dl, %0\n"
        "mov si, %1\n"
        "int 0x13\n"
        "jc %l[fail]\n"
        :
        : "c" (driveno), "g" (&params)
        : "ah", "dl", "cc", "memory"
        : fail
    );

    if (2048 % params.blocksize != 0)
    {
        panic("disk block size not a factor of 2048");
    }

    params.driveno = driveno;
    return;

fail:
    panic("failed to retrieve drive parameters");
}


This works fine as it is, but as soon as I remove the volatile keyword from the argument list, driveno takes on a different value but the function seems to return normally without a panic, until later on in the bootloader it panics while failing to read a disk block.
With the volatile, everything is successful.

I know that volatile is supposed to inhibit optimizations but I can't figure out why the compiler thinks it can optimize this out.
I looked at the assembly side by side and I can't figure out why volatile even fixes it; it should produce the exact same results, but it doesn't.
I'm stumped.

CFLAGS=--std=c11 -c -O1 -ffreestanding -Wall -Wno-pointer-to-int-cast -masm=intel -march=i386 -m16

>it's real

gcc.gnu.org/onlinedocs/gcc-6.2.0/gcc/DEC-Alpha-Options.html

>mbuild-constants
>Normally GCC examines a 32- or 64-bit integer constant to see if it can construct it from smaller constants in two or three instructions. If it cannot, it outputs the constant as a literal and generates code to load it from the data segment at run time.
>Use this option to require GCC to construct all integer constants using code, even if it takes more instructions (the maximum is six).
>You typically use this option to build a shared library dynamic loader. Itself a shared library, it must relocate itself in memory before it can find the variables and constants in its own data segment.

This is a very specific use case...

Also, this flag only seems to show up in Alpha compilers. Why are you targeting the DEC Alpha ISA?

post the asm output with/without volatile

>inline ASM
user, if you're going to do assembly shit, put it in a separate .s or .asm file and link it with your C code. It's much easier to not fuck things up.

Sure.

No volatile: pastebin.com/4pS8UwcX
Volatile: pastebin.com/evsyYdcW

>O1
>Applying any optimization flags to bootloader code without understanding anything

I suspect that if you read the entry for -O1 at gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html, or whatever version of GCC you have, you'll find something in there that would interfere with your code. I suspect it's one of the -ftree or -fssa options it enables.

Try compiling without any O flags at all or -O0.

-OU

I had it on -O0 to start with, it works fine without any optimizations of course.
It's just now that I wanted to enable optimizations to decrease code size which is what broke it.

Alternatively they could not use C for boot level code. There's absolutely zero chance at portability, and half of what you're writing is going to be inline assembly. You might as well just write the whole thing in pure assembly and not have to worry about any of this.

Wrong.
Writing a bootloader, especially when you have to read a filesystem and load a kernel, is a whole lot easier in C than in straight assembly.

boot code is a relic from the past, you don't need it anymore since UEFI and GRUB.

I wouldn't think about it too much; just use volatile and deal with it. Heck, I would probably even use -O2 with it.

But if you really want to figure out what is wrong, you can selectively omit optimizations from what -O1 enables, either by copy-pasting the full equivalent set of optimization options and deleting some of them, or just by adding some -fno- options.

But as I said, it's probably one of the tree or ssa optimization passes GCC uses that makes your bootloader crash. I doubt you are triggering UB, which would usually be the other explanation.

I'm writing an app using a navigation drawer, which uses fragments to switch between the views.
For the basics I followed this tutorial
>github.com/codepath/android_guides/wiki/Fragment-Navigation-Drawer

Now I have 2 problems
1.
>Open App
>starts with fragment A
>switch to fragment B
>turn phone from portrait mode to landscape mode
>switches unintentionally to fragment A

2.
>pic related
If I open the keyboard, the toolbar increases its size so I can see at most the 1st TextView.

In the tutorial they replaced the action bar with a toolbar, because of interaction with the nav drawer.
I guess that's where the problem is, but I don't know where to look.

Any idea how that can happen and/or how to fix it?

okay fixed problem 1
was actually pretty easy, just had to add
>android:configChanges="keyboardHidden|orientation|screenSize"
to the activity tag in the manifest

first for go

Why does C still have the ']' symbol?
Isn't it completely useless since the array-subscript operator could just as well be only '['?
int *arr = {...};

int i = arr[num];
int i = num[arr];
/* ^since those two are identical, don't you just need */
int i = arr (operator) num;
int i = num (operator) arr;
Wouldn't a single char ('[' or ']' or something else entirely) be enough for that operator?
like
int i = arr§num;
int i = num§arr;

Honestly asking myself why the array subscript operator got opening and closing brackets to begin with, seems like a big design error to me...

Because accessing an index with anything more than a single identifier would force you to put parentheses or some other enclosing syntax around the expression.

you're serious?

a + b[c + d] + e

Now you know why we have the ']'.

Coming to C from javascript and php. Any ideas I can come up with are either too easy or too complicated to make.

Sometimes my fingers tingle when I code in certain languages.

Enjoy your 10000-line code, kid.

I get that there will be code looking like this which might not "look nice", but that's probably just because we aren't accustomed to it.
a + b§(c + d) + e

But since the array subscript is bi-directional there isn't really a need for parentheses as the operator; it's just a question of precedence.
It's like saying the addition operator is shit because it needs parentheses if you do:
(a + b) * c

yes.

It would just become
a§(b + c)
most of the time.

...

What possible advantage does your "fix" have?

None. It just seems weird for a bi-directional operator to be made up of brackets, which imply some sort of direction/hierarchy.

I'm not sure I follow.

Are you saying that this particular change ought to happen, regardless of the fact that it would single-handedly break virtually all existing C codebases?

>need to compile emacs from source
>there's this faggot
git.savannah.gnu.org/cgit/emacs.git/commit/?id=6133d656d0a62fa9b45d84d7f71862f577bf415c

>g9113oto
How the fuck can a

+ g9113oto imagemagick_error;

fucking what

he wasn't alone

...

>tfw freelancer
>tfw forced to deal with retarded POO IN LOO clients every day

A-at least I have muh freedom.

Are there any freelancer websites that aren't infested by that scourge?

Conceptually there is a direction. a[b] is offsetting a by b. It's just a quirk of the fact that offsetting is actually addition, so the operator becomes bi-directional.
It's only something a turbo-autist would be bothered by.

I bet you think we should write if statements like
if condition1 && condition2)
do_thing();
because the first bracket isn't needed for it to be unambiguously parsed.

what are some good books about programming as a whole, not just about programming in one language?

contribute to open source project for free instead

He probably made that commit in emacs.

The C programming language.

What's next, only one parenthesis for functions?

-fno-rtti -fno-exceptions

If you are already on it: why not use Polish notation instead of infix operators? It eliminates (in most cases) the need for brackets altogether: § arr num
(or: § arr + num1 num2)

And how am I supposed to eat and pay rent?

yes I'm reading that
and I'd like to start another book too, but I don't want to go through two C books at once (or any other language for that matter)
so I picked up The Tao of Programming thinking I'd learn more about programming in general, but it wasn't really about that

No, I don't.
I just question the decision of using brackets as a bi-directional operator. Maybe there is a reason behind it I don't see, maybe it's really just that people didn't like to write code looking like that.

>Questioning a decision == autism
I seriously hope you aren't allowed to vote yet.

>a[b] is offsetting a by b
int *arr = {...};

int i = arr[5]; /* offsetting arr by 5*sizeof(int) */
int i = 5[arr]; /* still offsetting arr by 5*sizeof(int) */
I don't really see the direction here, the compiler searches for what is int and what is pointer and does pointer + (number * pointer-type-size).

I would, if the following were possible
int func(int arg);

int main(void){
return 5(func);
}

help me understand what this dude is saying

a[b] and b[a] are the same?

yeah but a[2][3] is not the same as 3[2][a]. keep in mind [] is postfix and chainable.

Database question:

I'm using mongodb (I know) as a database on a personal project. Is it bad practice to have two collections which hold the same information? I have a users collection, and within each user document there's an embedded document called polls that holds each user's created polls. Is it bad practice to also have a collection called polls which holds every individually created poll and its associated info - like who created it, etc.?

standard says

"""
The definition of the subscript operator [] is that E1[E2] is identical to (*((E1)+(E2)))
"""

Yet another NSA backdoor. Wow just wow.

>a[b] and b[a] are the same?
Yes. It's a quirk of the fact that a[b] and *(a + b) are exactly equivalent in C.
Since addition can be written in any order, *(b + a) is the same as *(a + b), so that means a[b] is the same as b[a].
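e.g. this compiles and prints the same element twice (tiny demo):

#include <stdio.h>

int main(void)
{
    int arr[] = { 10, 20, 30, 40 };

    /* arr[2] is defined as *(arr + 2), and addition commutes,
     * so 2[arr] == *(2 + arr) names the exact same element. */
    printf("%d %d\n", arr[2], 2[arr]);   /* prints: 30 30 */
    return 0;
}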

>using NoSQL
>caring about duplicate data

why is Java GUI programming such cancer

in C, yes

There is nothing "general" about programming. Most languages, excluding Haskell and Lisp, are no more than C with added syntactic sugar and bloat.

Does anyone here know (without using google) why C has the -> operator? Why doesn't . just dereference pointers?

Would not have happened if Richard M. Stallman (PhD) were still in charge of maintaining Emacs.

I would call it a lesson in mediocrity, really.

Could be worse.

Could be better.

What if you don't deal with a pointer to a struct but with a struct directly?

Finally fixed it!
git.savannah.gnu.org/cgit/emacs.git/commit/?id=97f35401771f782d5edc0dc6b841e054ca8685c3

then the . would work as usual.

>Most languages, excluding Haskell and Lisp, are no more than C with added sugar syntax and bloating.

>Lisp First appeared 1958; 58 years ago
>C First appeared 1972; 44 years ago


lel, k tard.

>I don't really see the direction here, the compiler searches for what is int and what is pointer and does pointer + (number * pointer-type-size).
uhhh no.
arr[5] is equivalent to *(arr + 5)
If you did basic math in school, you'd know that a + b = b + a, so arr[5] = *(arr + 5) = 5[arr] = *(5 + arr)
The compiler is not doing any pointer or int searching you fucking retard.

>That last bit
I sure am glad you didn't design C.

But C doesn't know if that number it is processing is a pointer or a value.

Trying to read the string.h source code to know how it works.

>excluding

Uh, yes it fucking does. Idiot.

i see, thanks. conceptually it seems bad in that writing b[a] indicates to me that b points to something i have a handle on

lol, the compiler knows exactly whether the thing preceding the dot operator is a pointer to a structure or the structure instance itself.

It's probably because it's easier for the compiler.

Reading standard header files is a nightmare, as they're littered with feature test macros and all sorts of other shit to make them work on many different compilers and platforms.
There isn't really a lot to learn there.

Questioning a decision when you yourself admit it doesn't matter is autism.

>It's probably because it's easier for the compiler.
Very doubtful.
If so, then C compiler makers are VERY lazy people, and probably stupid.

adding to a pointer does take the size of the object the pointer points to into account.
It's just that I meant the mathematical '+' in my post and not the operator '+'.
int *i = 0;
char *c = 0;
printf("%p\n%p\n", (void *)(i + 1), (void *)(c + 1)); /* typically 0x4 and 0x1 */

No, it's called trying to learn.

You're kinda right. I'm trying to figure out how an array of char turned into one big string and I can't seem to find any good explanation.

Because it uses Java.

Doesn't the whole a[b] = b[a]-thing blow up in your face if a and b have different sizes?
(as in: when C says a[b] = *(a+b) it actually means *(a + b*sizeof(*a)) or something like that)

The -> operator is just syntactic sugar like +=
a->b is identical to (*a).b
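Quick demo if you want to see it (throwaway struct, just for illustration):

#include <stdio.h>

struct point { int x, y; };

int main(void)
{
    struct point pt = { 1, 2 };
    struct point *p = &pt;

    /* p->x is defined to mean (*p).x: both read the same member. */
    printf("%d %d\n", p->x, (*p).x);   /* prints: 1 1 */
    return 0;
}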

>look something up
>stackoverflow question is the first result
>decent accepted answer
>five answers by rahuls saying the exact same thing

See above. Also, in a[b] one of the two has to be an integer and the other has to be a pointer/array.

>If so, then C compiler makers are VERY lazy people, and probably stupid.
You need to understand how and when C was designed.
C was designed in the 70's, when compilers were very stupid. Also, C's designers were the primary users, so a lot of things made it into C just to make writing the compiler easier.

Another thing is that C typically doesn't overload its operators very much. Most operators only do 1 thing, with the only exception being the arithmetic operators (note: unary * and binary * are different operators).

I used to do (*a).b when I learned about linked lists and my code was so fucking messy that I almost stopped giving a fuck about linked lists.

Glad libraries exist, because I did something like *(*(a.next).next) in most of my code and it confused the fuck out of me.

>It's probably because it's easier for the compiler.
Nope.

Alright, let me solve the quiz: In modern C (ANSI, 99, 11), there is no reason to have the -> operator.

Back in the olden days before ANSI, C had two features that made the -> operator useful:

First of all: Structure fields were not namespaced. The following code would not compile:

struct X {
    int a;
    int b;
};
struct Y {
    int b;
    int a;
};


Because the `a` and `b` field names are used twice.

If you've ever wondered why the fields of struct stat are prefixed with st_, this is why.

Second of all: The -> operator worked just like the subscript operator. That is, a->b == b->a. You could in fact write things such as 5->st_size and it would be the same as *(5 + st_size). Remember that field names were not namespaced and were just numbers denoting the offset of the field.
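And for the first point, here's a quick way to convince yourself that modern C namespaces members per struct; this compiles cleanly today, which (per the above) it wouldn't have pre-ANSI:

#include <stdio.h>

struct X { int a; int b; };
struct Y { int b; int a; };   /* reuses the names a and b: fine in ISO C */

int main(void)
{
    struct X x = { 1, 2 };   /* x.a = 1, x.b = 2 */
    struct Y y = { 3, 4 };   /* y.b = 3, y.a = 4 */

    /* each .a resolves within its own struct */
    printf("%d %d\n", x.a, y.a);   /* prints: 1 4 */
    return 0;
}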

The recommendation in the picture you've linked is the opposite of what I'm asking about. I don't want to embed what could be another collection as a nested document.

C# is better than C and C++ because dealing with memory is a headache

>inb4 pajeet faggot

*tips hat with left hand*

pajeet my son
finally you are ready for enterprise

Thanks Rajesh! 4 Rupees has been deposited into your Microsoft® account.

I also forgot to say that C# has a working garbage collector.

favorite dialect of lisp?

dylan

The image illustrates that there are few good ways to deal with duplicate data in a NoSQL DB like Mongo.

If you want to avoid duplicate data, switch to an RDBMS.

C++

I agree.

does diaspora still exist?
lmao

Parentheses are optional in Ruby, so it's possible.

Cool. Write me a large C# project that never triggers the GC and tell me how fun that is.

Needing some advice here.

I just finished my Bachelor's degree in Physics and am now going to continue studying for a Master's degree. A look at the job market showed me that the demand for physicists with programming skills is very high, and thus I'd like to get into it.

My issue now is that I don't know which programming language I should focus on/start with.
Any suggestions?

>write a large project with the requirement of unmanaged collection in a primarily managed language

I don't follow, what would be the purpose?

Do you think that the GC significantly impacts more than a fraction of 1% of all applications?

Any popular language will do.

When it comes to things like math majors, statisticians, etc. that use programming languages, the hiring manager is just looking to see that you understand the basics of programming.

They'll likely be using something specific like R, Julia, etc. which are relatively uncommon languages.

unfortunately, i think python is the answer here

C if you're serious about it
Or python if you want something easier

GC does not affect performance (unless you're creating an OS kernel using C# which is stupid).

>GC does not affect performance
hmmm

Use C if you need something powerful.

If you're into data processing, use Python or Java.

>GC does not significantly affect performance in the vast majority of applications*
Fixed.

You're inviting shitposts with statements like yours.

inb4 some autist spergs out about the GC and how it's the devil

If you use inheritance especially in the way it's taught at university then you sure as hell will write something like that. Composition is good and that hierarchy is also retarded.

The behavior of the modeled entity and the behavior of the object modeling it don't need to be the same.

This is true: given that you say "dealing with memory is a headache" but you don't say "but it's required to achieve x", you obviously don't need manual memory management and GC is fine for your use case.

It's so sad that this still triggers so many people.

>GC does not affect performance
kek

>tfw posting on /dpt/ from emacs

no, but for some projects it does affect performance to the point where it's better to use C/C++. I wouldn't use an OS written in C#, for example.

>GC does not affect performance
Pajeet my son

>GC does not affect performance
public class Test2 {
    public static void main(String[] args) {
        long t = System.currentTimeMillis();
        System.out.println(System.currentTimeMillis() - t);

        long a = System.currentTimeMillis();
        Runtime.getRuntime().gc();
        System.out.println(System.currentTimeMillis() - a);
    }
}

>0
>6

WEW
E
W

I don't believe you

Does running the gc in an empty program unironically take 6 milliseconds in Java? Who's arguing that this shit is useful?

kys tripfag

Thanks for the responses.
C sounds like a good starting point to me, but which one of the C languages should I pick? C, C# or C++?

Also, my idea of learning the language was not only to do "dry theory" and learn from books etc., but also to actually work on a bigger, fun project while learning it. I thought of creating a small quantum physics game to keep up motivation while learning.
It's probably a dumb question, but: which one of the 3 languages is capable of this?

I'd pick C++. This is the first language I've learned

So it wouldn't be a total faux pas to create this collection that consists of documents that are also embedded within the documents of a different collection?

Or would I be better off processing the data from my current, sole collection in a route handler in order to generate a data structure similar to the one that would be returned if I queried this second collection? Something tells me this option would scale very poorly, and that I'd be better off not placing that kind of workload on the server.

where is this from? looks interesting

Ignore the faggot that explicitly said
>GC does not affect performance

Obviously, no one is advocating for a kernel to be written in C#.

>C sounds like a good starting point to me but which one of the C languages should I pick? C,C# or C++?
user, you're missing some education here.

C, C#, and C++ are nothing alike, and are by no means in the same family of languages. The fact that they all have a C in their name is relatively meaningless, much like a bear eats fish and a bearing facilitates reduction of friction.

Of those three languages, C# or C++ would be your best bet.

C# is the easiest to get into of those three, but C++ should serve your purposes well enough.

What are you even trying to build?

You'd be better off actually evaluating whether MongoDB is your best bet.

Unless this is for a school assignment, or something.

In my experience, NoSQL is best used in places for caching relatively non-complex data, while main data stores are better off relational.

[cock | cocks

Write this in plain English, and I'll see what I can do.

What is the goal of this line of code?

>his C library is not namespaced

I need a small project to program for class, preferably something useful.
I was thinking of a .cue file repair program & splitter, but it might be too uncomplicated.
Any ideas?
It sadly has to be Java btw

Yes, it is. Fuck you.

apparently

also
test2.java:1: error: class Test2 is public, should be declared in a file named Test2.java
public class Test2 {
^
1 error
I'd forgotten about this and don't know how to feel about it ...

>his C library exports a function called open

Working on a discord bot in JavaScript github.com/KeckleKnight/Keckle-Bot

a C# program that writes itself in java

Go look at some other Node repos and see how to structure it. Yours is shit

>should be declared in a file named Test2.java
One of the many disgusting things about Java

Working on a discord bot in JavaScript: github.com/KeckleKnight/Keckle-Bot. I'm just starting out, ik it's shit, I'm gonna completely redo it soon

This is not bait.

Is there any reason to use Java? I feel like anything Java can do, C# can do better.

Wait, you can't have more than one public class in a Java file?

nope. Java is horror

Mfw

I use Java to make bots for RuneScape (a game written in Java)

Maintain old code.

Some obscure hardware, but that's disappearing too. C# runs on microcontrollers, Arduino, etc. now.

There is only one C language called C.

C is used to write high-performance libraries in simulation, numerical processing, etc.

C++ is widely used and has a relation to C. It can essentially be used to do the same as C. However, interfacing with C is the de facto standard due to its simplicity, and that's why many libraries are written in C (apart from performance and a bunch of inline assembly in some cases) and have a C interface.

C -> proper C++ transition might be hard for some. C++ gives you more concepts to deal with as it has higher level constructs.

C# targets a virtual machine and has a GC which means you need to deal very little with manual memory management.

However if you want to do programming close to physics (scientific computing) or other performance sensitive applications/libraries you will most likely deal with C/C++ or Python.

>Which one of the 3 languages is capable of this?
All of them.

I would recommend learning C then C#. For most low-level stuff C will be enough and C# is a high-level general purpose language.
C + Python is also a trend in sci computing. High-performance parts being written in C with the "glue" code and perf. insensitive parts being written in Python. However C# will give you a more traditional mature language experience.
C can teach you some low-level bits which are more useful than most people think, and later, if you wish to specialize in library writing, it will be essential.

Almost complete backwards compatibility with older versions.

In terms of just the language, no. But there are more jobs in Java than C#, and Android development is usually done in Java, though it can be done in C# these days too.

How do I edit this library?
I want to add a line inside the for() loop, but when I try and type, it doesn't let me.

There's a little padlock symbol next to the tab, how do I unlock and edit this?

Is libgdx a compiled dependency or do you have the actual source files?

I'm learning JAVA.

I like to write code like this:


public double average() {
    if (this.amountOfNumbers == 0) { averageOfNumbers = 0; }
    else { averageOfNumbers = (double) this.sumOfNumbers / this.amountOfNumbers; }
    return averageOfNumbers;
}

i.e. I write the curly brackets on the same line, not on a new line

I find it more aesthetically pleasing

Is it bad practice to do this?

Are there any other programmers who have a similar style?

Where can I find out about other styles?

I really dislike the GNU style

>this
>no this
Well nigga make up your mind

I'm blind. What you are viewing is what your IDE decompiled the Java bytecode to. You can't edit that. You need the libgdx source code to edit.

Horrible style. Just fucking horrible.

Yes I copy/pasted non-functional code
But that is not the point

Why?

Pretty sure those anons were talking about the C# GC.

Thanks again for your input, my decision falls on C.

>amountOfNumbers
The word "amount" doesn't mean what you think it does.

>where can I find about other styles?
en.wikipedia.org/wiki/Indent_style

The style is retarded.

I don't like it.

return (this.amountOfNumbers > 0) ? (double) this.sumOfNumbers / this.amountOfNumbers : 0;

Prepare to spend more time wrangling with the languages than actually testing functionality.

C# really is a pleasure to read compared to Java.

I think I'm doing the "Lisp style"
is that whack or is it master race?

I'm a total beginner, that's too advanced for me

The biggest problem with languages that are not C is that they can't include C headers, making working with them a pain and error prone.

Does the circled file look like it's source code?
I can't tell what I'm meant to look for.

Could you link me something, or tell me what I'm meant to do to edit, and then replace the original library?

I'm unsure what to Google to achieve this.

>(condition) ? (true) : (false)
Nothing really advanced

The biggest problem with C is that it's completely useless for most real-world problems.

What do you use it for, fizzbuzz?

>Implying something useful not using C exists
Are you even a programmer?

>911
>
>The biggest problem with C is that it's completely useless for most real-world problems.

In your IS shithole it for sure is useless.

public double average() {
    if (this.amountOfNumbers == 0) {
        this.averageOfNumbers = 0.00; }
    else {
        this.averageOfNumbers = (double) this.sumOfNumbers / this.amountOfNumbers; }
    return this.averageOfNumbers;
}


Is this any better? I fixed it.

I really can't stand seeing the ending " } " on a separate line

it looks so fugly

Do I really need to do it?

>Do I really need to do it?
You don't need to do anything.

What's the matter?

public double average() {
    if (this.amountOfNumbers == 0) {
        return 0.0;
    } else {
        return (double) this.sumOfNumbers / this.amountOfNumbers;
    }
}

Pick a brace style and keep it consistent.

An ending brace at the end of an arbitrary line of code is NOT good practice.

It's hard to read and invites a greater possibility of putting a line of code that should have been in the block elsewhere.

I mean I hate seeing this:

}
}
}


stuff like that....

like, if it's a long block of code, sure, writing } on a new line makes sense

but if it's just a few statements it seems more logical to put } at the end of the statements... and more beautiful

maybe I'm just autistic

>listening to sicp lectures on the train
so comfy

kill yourselves

You should try assembly. It doesn't have braces.

public double average() {
    if (this.amountOfNumbers == 0) {
        return 0.0;
    } else {
...

doesn't

>} else {

look ugly to you?

isn't it nice like this:

public double average() {
    if (this.amountOfNumbers == 0) {
        return 0.0; }
    else {
....

>every developer has preferred pronouns in their bio now
when did this start

>isn't it nice like this:
No, you stupid fucking faggot.

Function > Form

return 0.0; }
Triggers my autism and makes me want to kill kitten with a flamethrower.

See

public double average()
{
    if( this.amountOfNumbers==0 )
    {
        return 0.0;
    }
    else
    {

>Pajeet match over 9000%

public double average ()
{ if (this.amountOfNumbers == 0)
  { return 0.0
  ;} else
  { return (double) this.sumOfNumbers / this.amountOfNumbers
  ;}}

>tfw you run gofmt before every save and rob pike takes care of all this autism for you

Objectively: kernel.org/doc/Documentation/CodingStyle

pls get me into Óbudai Egyetem.

Mux question.

So let's say there are two inputs, a and b, with a variable length, meaning they can be any number of bits.
And we also have c to choose between a and b.

How would I describe this mux in Verilog?

If a and b were 1 bit, the output would be something like
z = ((a & c) | (b & ~c))
but this wouldn't work if a and b were a different length than c.

Thanks.

/* Copy-paste me in your Java main :^) \u002a\u002f\u0053\u0079\u0073\u0074\u0065\u006d\u002e\u006f\u0075\u0074\u002e\u0070\u0072\u0069\u006e\u0074\u006c\u006e\u0028\u0022\u006b\u0065\u006b\u0022\u0029\u003b\u002f\u002a */

HELP

I NEED PICTURES OF CUTE ANIME GIRLS WITH CODE BOOKS

Nicely memed.

Don't do this, it makes mustard gas.

...

Hi, newbie Python user here. I'm trying to create a FASTA file, which is just line 1: header, line 2: DNA sequence.

But I keep getting either the header and DNA seq on the same line, or the header on line 1 and the DNA seq on line 3. Any help would be much appreciated.

#open the fasta file
my_file= open("dna.fasta")
#read the fasta header first and assign it to the variable fasta_header
fasta_header= my_file.readline()
#read the DNA sequence second and assign it to the variable dna_seq
dna_seq= my_file.readline()

#Find the starting position ATG
start= dna_seq.find("ATG")
#assign the contents of dna_seq, from position 50 to the end of the string, to the variable exon1
exon1=(dna_seq[50:-1])
#print the final sequence of exon1
print("The sequence for exon 1 is: " + str(exon1))
my_file.close()


outfile= open("dna_exon1.fasta","w")
outfile.write("%s\n%s"%(">hg38_geneX_exon1_ORF",exon1))
outfile.close()

>there are "professional" programmers who don't understand how to properly use atomic operations
I hope you don't do this

Sorry, son, I'm dead. In hell the only place we can access with good speed on the computers is Cred Forums. (I wonder why.)

Anyway, applications for the fall start ended a long time ago. Applications for a mid-semester MSc start haven't opened yet, have they?
Which degree did you apply to? EE? ME? CE? BSc or MSc?

What are these? Operations that are thread safe basically?

Are you a professional programmer?

>not fully understanding all of the components of ACID

Here's C# garbage collection in action

Yes

Are you unironically telling me that those loops will not be optimized away?

How can I check why my webapp is running slow?
Is it because of the queries, etc.?
Is there any tool?

You fucking faggot, you fucked up the non GC timer. You never start it.

Fuck me, I saw the flaw.

Here's the real test.

Dumb fucking faggot.

>without GC takes longer
solid benchmark

>Garbage collection confirmed to speed up code.
HMMMMM

Refer here:

What made you want to save your code as an .xps anyways?

What the heck are you even benchmarking?
Holy fucking shit. That code.

I don't have a PDF printer installed on the computer I'm using.

Why export it to a pdf/xps in the first place?

I...what?

I decided to learn C++. Is this a good place to start
learncpp.com/
?

Because my screen is too small to screenshot the whole code and shrinking the size on Visual Studio makes it unreadable.

So shrinking the size in the xps is readable, but shrinking the text size in VS is unreadable?

It's how most people export documents to PDF.

Yes. At least for me.

i've determined that homebrew is fucking horrible

...

I'm assuming you're not talking about the game.

Although if you were, your assessment would still be accurate.

I second this inquiry.

We're learning how to do Scrum this time.

My team allotted 2 hours to creating a git repo.

It took two members 6 hours each. This woman needs to fucking stop holding so tightly onto our git repo, because she doesn't understand how to fucking work it. I consider myself a newbie at git, but I know how to fucking merge. She doesn't. She wants to be the sole person responsible for pulling feature branches into the development branch.

>She
There's your problem user