So I've got a simple C program that runs in 40 milliseconds on my x86 machine (1.6 GHz Intel Atom). 40 milliseconds is not fast enough for me; I want it to run in under 10 milliseconds. How do I optimize my C code? What is the sequence of steps a programmer takes when optimizing code? How do I profile my program and find out which parts I need to refactor or replace with a better algorithm?

So far all I've done is use gcc -O2. What else can I do to optimize my program so it can run on my calculator?
Did you try with -Ofast? From the GCC manual: "Disregard strict standards compliance. -Ofast enables all -O3 optimizations. It also enables optimizations that are not valid for all standard-compliant programs. It turns on -ffast-math and the Fortran-specific -fno-protect-parens and -fstack-arrays."
Will you please provide the source code, so that I can have a look at it? You should not depend blindly on the optimizations a compiler provides. Before optimizing, know when to optimize, what to optimize, and how to optimize. For the first: if the performance improvement is significant without too much headache, go for it! For the second: use a profiler; it will show you where most of the processing time is spent in your program, and that part should be optimized first. For the third: you may use a better algorithm, or employ some tricky fast solutions; it depends on the case.
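
(For example, on Linux a cheap starting point is gcc's own gprof support; the file names below are only placeholders:)

    gcc -O2 -pg prog.c -o prog    # -pg adds profiling instrumentation
    ./prog < input.txt            # a normal run writes gmon.out
    gprof ./prog gmon.out         # per-function breakdown of where time went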

Other answers:

I just tried -Ofast with no luck :( still giving me about 40 milliseconds. Here is the source:

    #include "prog.h"

    int main(int argc, char *argv[])
    {
        switch (argc) {
        case 1:
            solve_from_stdin();
            break;
        default:
            return -1;
        }
        return 0;
    }
Please provide prog.h as well! In case you are working on some secret project, you may use a profiler yourself or ask other project members.
How are you measuring the time?
bash's builtin time command
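
(For finer-grained numbers than bash's time, you could also time the call directly in C; a minimal sketch, assuming a POSIX system, and you may need to link with -lrt on older glibc:)

    #include <stdio.h>
    #include <time.h>
    #include "prog.h"

    int main(void)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        solve_from_stdin();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        /* elapsed wall-clock time in milliseconds */
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        fprintf(stderr, "solve_from_stdin: %.3f ms\n", ms);
        return 0;
    }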
Did you try removing everything but a bare main(), to get the minimum execution time?
This is what prog.h looks like:

    #ifndef _PROG_H_
    #define _PROG_H_

    unsigned int a(int *, int *, const unsigned int, const unsigned int);
    unsigned int b(int *, int *, const unsigned int, const unsigned int, const unsigned int);
    unsigned int c(int *, const unsigned int);
    void solve_from_stdin(void);

    #endif /* ifndef _PROG_H_ */

What profiler should I use?
AMD APP Profiler is a free C/C++ profiler.
Intel Parallel Studio also contains a profiler.
In the header file you only have declarations, not definitions, so nobody can tell what the code really does; but could you try executing main() without the function call and report the measured execution time?

    #include "prog.h"

    int main(int argc, char *argv[])
    {
        switch (argc) {
        case 1:
            // solve_from_stdin();
            break;
        default:
            return -1;
        }
        return 0;
    }
But those kits are both exclusive to Windows/Visual Studio :( Without the solve_from_stdin() call, time outputs 0.000 :-D So that one routine is taking 99.9% of the CPU time :D
Intel Parallel Studio is available for Linux as well.
That's awesome, I'll check my package manager then.
Alright, I have Intel Parallel Studio XE in my package manager.
OK, now reactivate the call to the function but eliminate everything in the function body; then keep reactivating parts of the body until you can see which part takes the most execution time.
Old-school profiling style :-)
Okay, I deactivated procedure a() and I'm also getting 0.000 from bash.
But procedure a() calls procedure b().
Then reactivate procedure a() but not procedure b().
That decreased my time from 40 to 22 ms.
Does procedure a() do anything else apart from calling procedure b()?
Procedure a() also calls itself before calling procedure b().
That sounds strange; it should recurse forever, unless there is some kind of termination test to stop it.
Yeah, at the start it tests the value of (const int $1 + const int $2) / 2.
Is the function solve_from_stdin() recursive?
Here's procedure b():

    void merge_int(int *left, unsigned int len_left,
                   int *right, unsigned int len_right, int *end)
    {
        unsigned int i, j, k;
        for (i = j = k = 0; i < len_left && j < len_right; ++k) {
            if (left[i] < right[j]) {
                end[k] = left[i];
                ++i;
            } else {
                end[k] = right[j];
                ++j;
            }
        }
        for (; i < len_left; ++i, ++k) {
            end[k] = left[i];
        }
        for (; j < len_right; ++j, ++k) {
            end[k] = right[j];
        }
    }
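
(So b() is the merge step of a merge sort. A tiny hypothetical driver, just to show the calling convention; the expected output is included as a comment:)

    #include <stdio.h>

    /* assumes merge_int() from the post above is in the same file */
    int main(void)
    {
        int left[]  = {1, 4, 7};
        int right[] = {2, 3, 9};
        int end[6];
        merge_int(left, 3, right, 3, end);  /* merge two sorted runs */
        for (int k = 0; k < 6; ++k)
            printf("%d ", end[k]);          /* prints: 1 2 3 4 7 9 */
        printf("\n");
        return 0;
    }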
Is the b() procedure called just once in the a() procedure?
Right, just once.
But since a() calls itself recursively, it ends up calling b() quite a lot of times :)
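
(The poster never shows a(), but from the description it sounds like a textbook top-down merge sort. A hypothetical reconstruction, just to make the recursion concrete; every name and the scratch-buffer convention here are guesses:)

    /* Guess at a()'s shape: sort data[lo..hi) by sorting both halves,
       then merging them with merge_int() into a scratch buffer. */
    void sort_range(int *data, int *scratch, unsigned int lo, unsigned int hi)
    {
        if (hi - lo < 2)
            return;                           /* base case: 0 or 1 element */
        unsigned int mid = lo + (hi - lo) / 2;
        sort_range(data, scratch, lo, mid);   /* recurse on left half */
        sort_range(data, scratch, mid, hi);   /* recurse on right half */
        merge_int(data + lo, mid - lo,        /* this is the b() call */
                  data + mid, hi - mid, scratch + lo);
        for (unsigned int k = lo; k < hi; ++k)
            data[k] = scratch[k];             /* copy merged run back */
    }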
If so, you could try writing the b() procedure's body directly into the a() procedure, so that you save the cost of one function call (the context save onto the stack) per merge.
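
(Hand-pasting the body is error-prone; assuming a() and b() live in the same .c file, a safer way to get the same effect is to mark the helper static inline and let the compiler paste it in. Note that gcc -O2 already enables -finline-small-functions, so it may be doing this on its own:)

    /* Same merge body as posted above, now static inline so the
       compiler can inline it into a() without manual editing. */
    static inline void merge_int(int *left, unsigned int len_left,
                                 int *right, unsigned int len_right,
                                 int *end)
    {
        unsigned int i, j, k;
        for (i = j = k = 0; i < len_left && j < len_right; ++k) {
            if (left[i] < right[j]) { end[k] = left[i]; ++i; }
            else                    { end[k] = right[j]; ++j; }
        }
        for (; i < len_left;  ++i, ++k) end[k] = left[i];
        for (; j < len_right; ++j, ++k) end[k] = right[j];
    }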
I got a seg fault :(
Did you use the correct types for the left, right, and end variables?
Alright, I'm going to rework my code with a different data structure and see how it goes...
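
(One concrete direction for that rework, suggested here rather than taken from the thread: if the recursion overhead in a() is the problem, a bottom-up merge sort replaces it with a loop over run widths, reusing merge_int() unchanged:)

    #include <string.h>

    /* Hypothetical iterative rewrite: sort n ints with no recursion by
       merging runs of width 1, 2, 4, ... via merge_int() from above. */
    static void sort_bottom_up(int *data, int *scratch, unsigned int n)
    {
        for (unsigned int width = 1; width < n; width *= 2) {
            for (unsigned int lo = 0; lo < n; lo += 2 * width) {
                unsigned int mid = lo + width;
                unsigned int hi  = lo + 2 * width;
                if (mid > n) mid = n;     /* clamp partial runs at the end */
                if (hi  > n) hi  = n;
                merge_int(data + lo, mid - lo,
                          data + mid, hi - mid, scratch + lo);
            }
            /* after each pass, scratch holds sorted runs of 2*width */
            memcpy(data, scratch, n * sizeof *data);
        }
    }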
