Monday, February 28th 2011

New CUDA 4.0 Release Makes Parallel Programming Easier

NVIDIA today announced the latest version of the NVIDIA CUDA Toolkit for developing parallel applications using NVIDIA GPUs. The NVIDIA CUDA 4.0 Toolkit was designed to make parallel programming easier and to enable more developers to port their applications to GPUs. Three main features deliver on this goal:
  • NVIDIA GPUDirect 2.0 Technology -- Offers support for peer-to-peer communication among GPUs within a single server or workstation, enabling easier multi-GPU programming and better application performance.
  • Unified Virtual Addressing (UVA) -- Provides a single merged-memory address space for the main system memory and the GPU memories, enabling quicker and easier parallel programming.
  • Thrust C++ Template Performance Primitives Libraries -- Provides a collection of powerful open source C++ parallel algorithms and data structures that ease programming for C++ developers. With Thrust, routines such as parallel sorting are 5X to 100X faster than with Standard Template Library (STL) and Threading Building Blocks (TBB).
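To make the Thrust claim concrete, here is a rough sketch (not taken from the release; the array size is illustrative) of a parallel device sort, assuming a CUDA 4.0-era toolkit and `nvcc`:

```cuda
// Illustrative Thrust usage: sort 16M integers on the GPU.
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <cstdlib>

int main() {
    // Fill 16M integers with random data on the host.
    thrust::host_vector<int> h_data(16 << 20);
    for (size_t i = 0; i < h_data.size(); ++i)
        h_data[i] = std::rand();

    // Transfer to the GPU and sort in parallel on the device.
    thrust::device_vector<int> d_data = h_data;
    thrust::sort(d_data.begin(), d_data.end());

    // Copy the sorted result back to host memory.
    thrust::copy(d_data.begin(), d_data.end(), h_data.begin());
    return 0;
}
```

The STL-like call `thrust::sort` dispatches to a parallel GPU sort for primitive types; routines of this kind are behind the quoted 5X to 100X figures.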


"Unified virtual addressing and faster GPU-to-GPU communication makes it easier for developers to take advantage of the parallel computing capability of GPUs," said John Stone, senior research programmer, University of Illinois, Urbana-Champaign.

"Having access to GPU computing through the standard template interface greatly increases productivity for a wide range of tasks, from simple cashflow generation to complex computations with Libor market models, variable annuities or CVA adjustments," said Peter Decrem, director of Rates Products at Quantifi. "The Thrust C++ library has lowered the barrier of entry significantly by taking care of low-level functionality like memory access and allocation, allowing the financial engineer to focus on algorithm development in a GPU-enhanced environment."

The CUDA 4.0 architecture release includes a number of other key features and capabilities, including:
  • MPI Integration with CUDA Applications -- Modified MPI implementations automatically move data to and from GPU memory over InfiniBand when an application makes an MPI send or receive call.
  • Multi-thread Sharing of GPUs -- Multiple CPU host threads can share contexts on a single GPU, making it easier to share a single GPU by multi-threaded applications.
  • Multi-GPU Sharing by Single CPU Thread -- A single CPU host thread can access all GPUs in a system. Developers can easily coordinate work across multiple GPUs for tasks such as "halo" exchange in applications.
  • New NPP Image and Computer Vision Library -- A rich set of image transformation operations that enable rapid development of imaging and computer vision applications.
  • New and Improved Capabilities:
    o Auto performance analysis in the Visual Profiler
    o New features in cuda-gdb and added support for MacOS
    o Added support for C++ features like new/delete and virtual functions
    o New GPU binary disassembler
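The GPUDirect 2.0 and multi-GPU items above can be sketched with the CUDA 4.0 runtime API. This is a hedged illustration only (it assumes two peer-capable Fermi-class GPUs in one machine; error handling is omitted), showing a single host thread driving both devices and copying directly between them:

```cuda
// Minimal sketch: one CPU thread, two GPUs, direct peer-to-peer copy.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count < 2) { std::printf("need two GPUs\n"); return 0; }

    const size_t bytes = 1 << 20;
    float *buf0 = 0, *buf1 = 0;

    cudaSetDevice(0);                  // the single thread selects GPU 0...
    cudaMalloc(&buf0, bytes);
    cudaDeviceEnablePeerAccess(1, 0);  // let GPU 0 access GPU 1's memory

    cudaSetDevice(1);                  // ...then GPU 1, no extra threads
    cudaMalloc(&buf1, bytes);
    cudaDeviceEnablePeerAccess(0, 0);

    // GPUDirect 2.0: GPU-to-GPU copy without staging through system memory.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```

With UVA on 64-bit platforms, the same transfer can also be written as a plain `cudaMemcpy(buf1, buf0, bytes, cudaMemcpyDefault)`, since the unified address space lets the runtime determine which device each pointer belongs to.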
A release candidate of CUDA Toolkit 4.0 will be available free of charge beginning March 4, 2011, by enrolling in the CUDA Registered Developer Program. The CUDA Registered Developer Program provides a wealth of tools, resources, and information for parallel application developers to maximize the potential of CUDA.

For more information on the features and capabilities of the CUDA Toolkit and on GPGPU applications, please visit: http://www.nvidia.com/cuda

77 Comments on New CUDA 4.0 Release Makes Parallel Programming Easier

#1
Fourstaff
by: ToTTenTranz
Windows allows hardware differentiation and promotes hardware competitiveness.
CUDA does not.
At most, you could compare it to Mac OS X, since it only supports whatever hardware Apple chooses to include in its computers at a given time.

Regardless of how well regarded it is from a developer's point of view, it's just one more method for nVidia to try to sell more hardware with an exclusive computing API.
Perhaps that is so, but given that the prices of comparable AMD and Nvidia cards are around the same, it should not be too much of an issue: if you want OpenCL, you buy AMD (it's slightly cheaper in a lot of cases); if you want CUDA, you go for Nvidia.

I still do not get why Nvidia gets so much shit for making such a good product, while Intel's (and a bunch of other companies') USB gets no flak.
Posted on Reply
#2
Cheeseball
Actually, if one plans to utilize OpenCL in its entirety, you can go with either NVIDIA or AMD, as both implementations are fully supported. NVIDIA currently has the performance lead in OpenCL applications.
Posted on Reply
#3
JEskandari
by: Mussels


uhhh... no. if someone started slapping CUDA on their products in any way, even advertising on the box, nvidia would sue their asses off.

you have to pay, and get nvidia's approval, to use cuda for a commercial product. Hell, look how much of a tightarse they've been with hardware-accelerated PhysX, which runs on CUDA.
Well, there is some misunderstanding here.

CUDA is royalty-free and you don't need to pay anything.
But that's not the case with PhysX: it's not free, and if you want to use
it on your hardware you must pay a license fee.
Posted on Reply
#4
Cheeseball
That is correct. If a commercial product will be using PhysX, one will need to apply for a license with NVIDIA. However, if you use CUDA/OpenCL to create your own physics implementation, it's free, but of course that entails more research on the developer's side.
Posted on Reply
#5
Mussels
Moderprator
Let's clear something up.


It's free to code a program that uses CUDA, but it's not free to enable your own video cards to run CUDA. No one but nvidia is allowed to accelerate CUDA on their hardware. This is why we say it's not open source. (Whether the wording is correct or not, THAT is what we mean.)


If it was truly open like OpenCL, everyone could implement it for free.
Posted on Reply
#6
Bjorn_Of_Iceland
This thing is sleek and has good support and dev community. Nuff said.
Posted on Reply
#7
JEskandari
by: Mussels
Let's clear something up.


It's free to code a program that uses CUDA, but it's not free to enable your own video cards to run CUDA. No one but nvidia is allowed to accelerate CUDA on their hardware. This is why we say it's not open source. (Whether the wording is correct or not, THAT is what we mean.)


If it was truly open like OpenCL, everyone could implement it for free.
Well, you may not believe it, but there is no need to get a license to develop
a CUDA driver for AMD cards.

The problem is the attitude when it comes to this matter:
Still, for Cuda to be able to work on AMD GPUs, Nvidia would absolutely need AMD's support. Without it, Nvidia wouldn't be able to get low-level programming access to the GPU to develop the API. Even Nvidia admits that AMD would probably never allow this to happen. As for AMD, the company's point man on Stream seemed amazed we'd even asked.

AMD's Gary Silcott told the INQ "they [Nvidia] would intentionally damage performance to make Nvidia GPUs run the same app better." Then, perhaps thinking better of accusing Nvidia of hypothetical, yet outright, sabotage, Silcott added "Even if it wasn't intentional, it would not be optimized for our instruction set architecture like our own SDK."

That's okay though, since Nvidia has no intention of adapting its GPUs for AMDs technology either. "No, I don't see us supporting Steam..." said Nvidia's Derek Perez acidly when we asked him for his response. µ
That was from The Inquirer.
Posted on Reply
#8
Benetanegia
by: ToTTenTranz
And you fail to understand that it is hurting the development of OpenCL while feeding a vendor-specific competitor API to the developers.
You also fail to understand that this has been nVidia's strategy for quite some time.
As Jen-Hsun Huang said, "we're a software company".
Sure, it's been there for longer.
And so was Glide, when it came down.
lol and Glide was superb for as long as it lasted, way better than OpenGL and DirectX, up to the point where OpenGL caught up (and later DirectX), and it was THEN that developers started using OpenGL/DirectX and put Glide aside. That's how tech has to be abandoned, open or not. That's how CUDA has to go away, not because they simply drop it.
lol, wrong. Costs go way down if you adopt open source software.
That is false, since CUDA is free.
And what you fail to understand is that nVidia could do that same optimization in OpenCL to start with.
And they are doing it! But on the one hand CUDA has a two-year head start and will always have it: it will always have two more years of working with developers. On the other hand, CUDA, belonging to Nvidia, can be (and IS) updated according to developers' wishes almost instantly, something OpenCL cannot offer because of its standardisation process.

So until OpenCL can offer exactly the same, CUDA must exist, because that's what is best for developers and what's best for us: without it, developers could not create the amazing things they are doing with CUDA right now; they would have to wait one or two years until OpenCL is in the same state CUDA is in right now. That would not benefit anyone. The fact that you need an Nvidia GPU is completely irrelevant; you can at least have access to those applications, which you would not have otherwise.

Anyway, did you even know that Nvidia has always been first with their OpenCL conformant drivers? No? Check it: http://www.khronos.org/adopters/conformant-products/
2 years?!?? LOL. I just made a list of eight GPU vendors pushing OpenCL 1.1 compatibility in their latest GPUs right now.
Exactly. Pushing OpenCL compatibility NOW. CUDA has existed since 2006. That's quite a difference. That is what you fail to understand.

by: Mussels
because everyone is welcome to use directX (video card manufacturers). the same is not true for Cuda. CUDA is exclusive to nvidia hardware. they never offered it to AMD, that was a rumour that had zero fact behind it.
You are forced to use Windows just the same way you are forced to use an Nvidia card. Supporting DirectX but not supporting CUDA on the principle of their "openness" is NONSENSE. Both are proprietary, free-to-use APIs and there's no difference at all. Anyone can use DirectX for free; anyone can use CUDA for free. To use one you need Windows; to use the other you need an Nvidia card.
Posted on Reply
#9
Mr McC
by: Benetanegia
...the fact that you need a Nvidia GPU is completely irrelevant...
Not to me.
Posted on Reply
#10
Benetanegia
by: Mr McC
Not to me.
It may not be for you, but it sure is for humanity. Have you seen the medical imaging applications they've created with CUDA? Have you read what the doctors think about it? How helpful it is for early diagnosis and how many lives it can potentially save? The world sure is better with those apps than without them, and having them NOW is way better than having them two years from now. Period.

And that is only one of the applications. Thanks to CUDA (GPGPU actually, but we have discussed enough why, for the time being, GPGPU == CUDA) they are going to be able to predict earthquakes and tornadoes better, they can find out how molecules interact, they can find stars and other worlds, they will find minerals more easily, and facial/GPS image recognition can help detect terrorists and terrorist camps. This is real, not science fiction. Is having all those things now better than having to wait yet another two years? Definitely. The fact that they need to use an Nvidia card to do so is minuscule and completely irrelevant.

And like always, you are missing the point anyway. Open standards are better than proprietary ones, and you should know by now that's also my POV, but actual applications are better than none, and OpenCL has been here for only a year or so and is not yet as good as CUDA. Most of the GPGPU applications in existence are CUDA based because CUDA has existed since 2006, and that is when people started working on them. The world is definitely better with those apps (even if just a little bit), and having CUDA updated so those apps can get better and better is a good thing for everyone. Wanting the demise of CUDA only so that it leaves room for an API that, as it stands now, is inferior is evil and selfish.
Posted on Reply
#11
Mr McC
by: Benetanegia
It may not be for you, but it sure is for humanity. Have you seen the medical imaging applications they've created with CUDA? Have you read what the doctors think about it? How helpful it is for early diagnosis and how many lives it can potentially save? The world sure is better with those apps than without them, and having them NOW is way better than having them two years from now. Period.

And that is only one of the applications. Thanks to CUDA (GPGPU actually, but we have discussed enough why, for the time being, GPGPU == CUDA) they are going to be able to predict earthquakes and tornadoes better, they can find out how molecules interact, they can find stars and other worlds, they will find minerals more easily, and facial/GPS image recognition can help detect terrorists and terrorist camps. This is real, not science fiction. Is having all those things now better than having to wait yet another two years? Definitely. The fact that they need to use an Nvidia card to do so is minuscule and completely irrelevant.
Nvidia helps sick people and puppy dogs, and there I was thinking that it was simply another company engaged in cut-throat capitalist marketing practices. We should build them a shrine.

Think how many people used Internet Explorer a few years back. Internet Explorer has undoubtedly saved many lives and puppy dogs, but Firefox, an open-source competitor, has managed to eat up a sizeable chunk of the market in a relatively short period of time and has probably saved a number of lives and puppies too. Perhaps your estimate of two years to make the transition is overly pessimistic.

Life saving and puppy dogs are not inherent to the technology you are discussing, and if open-source alternatives exist that can serve the same purpose, they should be investigated and afforded preferential treatment.

Moreover, citing Windows use as an example of the hypocrisy of those who criticise the proprietary nature of CUDA adds no weight to your argument: two wrongs do not make a right.
Posted on Reply
#12
pr0n Inspector
by: Mr McC

Think how many people used Internet Explorer a few years back. Internet Explorer has undoubtedly saved many lives and puppy dogs, but Firefox, an open-source competitor, has managed to eat up a sizeable chunk of the market in a relatively short period of time and has probably saved a number of lives and puppies too. Perhaps your estimate of two years to make the transition is overly pessimistic.
Holy crap, was IE as technically superior to Mozilla/Firefox as CUDA is to OpenCL?

This feels like Inception.
Posted on Reply
#13
Mr McC
by: pr0n Inspector
Holy crap, was IE as technically superior to Mozilla/Firefox as CUDA is to OpenCL?

This feels like Inception.
Haven't seen the film yet.

Don't read too much into the comparison, my point is that open-source alternatives exist and can rapidly achieve mass acceptance, provided they are given the chance.
Posted on Reply
#14
Benetanegia
by: Mr McC
Nvidia helps sick people and puppy dogs, and there I was thinking that it was simply another company engaged in cut-throat capitalist marketing practices. We should build them a shrine.

Think how many people used Internet Explorer a few years back. Internet Explorer has undoubtedly saved many lives and puppy dogs, but Firefox, an open-source competitor, has managed to eat up a sizeable chunk of the market in a relatively short period of time and has probably saved a number of lives and puppies. Perhaps your estimate of 2 years to make the transition is overly pessimisitic.

Life saving and puppy dogs are not inherent to the technology you are discussing and if open-source alternatives exist that can serve the same purpose, they should be investigated and afforded preferential treatment.

Moreover, citing Windows use as an example of the hypocrisy of those who criticise the proprietary nature of CUDA adds no weight to your argument: two wrongs do not make a right.
Why Nvidia does it is IRRELEVANT. The fact is that CUDA now allows the creation of apps that OpenCL does not yet. So since having those apps is better than not having them, CUDA is a good thing and we should support it until OpenCL is an equal option; we should never bury it. And following your example: no, IE did not save those puppies in the way CUDA is doing. At any rate it would be Netscape who saved those puppies. And yeah, later came IE, and later Firefox. And you know what? That's a good thing. I use Firefox. ;)

But would the world (the internet) have been better without the proprietary Navigator? Would it have been better without IE, which became better than Navigator because Netscape stopped improving Navigator? It would not. I'm sorry, but without Navigator there would be no IE, and without either of them Firefox (or any open source code, for that matter) would never have been possible, since it was created by people on the net, using the proprietary software they were trying to displace.

So in two or three years we'll all be using OpenCL apps, and CUDA will be a thing of the past, or something that caters to a shrinking niche. And that will be a good thing. But for the time being CUDA is nothing but a good thing. Simple as that.
Posted on Reply
#15
Mr McC
by: Benetanegia

But would the world (the internet) have been better without the proprietary Navigator? Would it have been better without IE, which became better than Navigator because Netscape stopped improving Navigator? It would not. I'm sorry, but without Navigator there would be no IE, and without either of them Firefox (or any open source code, for that matter) would never have been possible, since it was created by people on the net, using the proprietary software they were trying to displace.
You assume too much. Hypotheses about what would have happened are inevitably only that: guesswork. Why assume that the vacuum created in our hypothetical world would not have been filled by something else, which in turn would have influenced all subsequent developments?

Nobody is arguing that CUDA is not helpful as things stand; the argument is about whether it could and should be replaced by a non-proprietary alternative, and how long that would take. But as you seem to be saying just that, we appear to agree, although I will not easily forget that you have tried to present Jen-Hsun Huang as Mother Teresa of Calcutta in a CUDA thread.
Posted on Reply
#16
Benetanegia
by: Mr McC
You assume too much. Hypotheses about what would have happened are inevitably only that: guesswork. Why assume that the vacuum created in our hypothetical world would not have been filled by something else, which in turn would have influenced all subsequent developments?
Yeah, and the world could have been better if the Persian Empire had won the Greco-Persian wars. Christianity, instead of evolving under the Roman Empire, could have evolved under the Persian Empire, where the Emperor was regarded as a god. Maybe the Emperor of such a world would have assumed the role of the Christian God (because of the same pressure that forced the Romans to adopt Christianity) and ruled the world with fairness and love until the end of days. Maybe. Maybe that would have saved the world from all the murdering that is and has been done in the name of religions. Maybe...

You know where I'm going with this? The thing is that the Greeks won, democracy survived, and we live in a mostly democratic world, which we know is better than living under a dictatorship.
by: Mr McC
Nobody is arguing that CUDA is not helpful as things stand, the argument is that it could and should be replaced by a non-proprietary alternative and how long that would take. But as you seem to be saying just that, we appear to be agreeing, although I will not easily forget that you have tried to present Jen-Hsun Huang as Mother Teresa of Calcutta in a CUDA thread.
I have at no point mentioned Jen-Hsun. I have never said what you claim I said. We could argue forever about whether guns are evil or not. I'm one of those people who think they are evil, in the sense that they are created to kill and are used to kill. But I'm very well aware that they can be and are used for good purposes, and in the particular case in which a gun is used to stop someone from killing innocent people, that gun is a good thing. Hence the existence of that gun is good.

I'm just saying the same about CUDA. I don't care who created CUDA and what benefits they may get from it; it's irrelevant to me and irrelevant to humanity. I know how CUDA is being used and that it's good. It has created a market that is expanding and has allowed things that would not be possible without it, and that's only a good thing. I know that if it's constantly updated, that's better than if it's not. Wanting it to go away, in the way you guys seem to want, is just as "evil" and nonsensical as wanting the gun I mentioned above to disappear before the shot that saves those innocent lives has been fired.
Posted on Reply
#17
ToTTenTranz
by: Benetanegia
lol and Glide was superb for as long as it lasted, way better than OpenGL and DirectX, up to the point where OpenGL caught up (and later DirectX), and it was THEN that developers started using OpenGL/DirectX and put Glide aside.
(...)
Exactly. Pushing OpenCL compatibility NOW. CUDA has existed since 2006. That's quite a difference. That is what you fail to understand.
Completely wrong.
Glide was abandoned when relevant GPU makers appeared, that used non-vendor-specific APIs. It was abandoned when PowerVR, nVidia, ATI, S3, Matrox, Rendition, etc started to take a significant share of the consumer 3D graphics card market, and 3dfx wasn't the stand-alone king of 3d accelerators anymore.

It had nothing to do with technological advances or OpenGL/DirectX catching up in terms of features <- that's what you fail to understand.



by: Benetanegia
(...) the fact that you need a Nvidia GPU is completely irrelevant (...)
lol



by: Benetanegia
Anyway, did you even know that Nvidia has always been first with their OpenCL conformant drivers? No? Check it: http://www.khronos.org/adopters/conformant-products/
And 3DFx was the first with DirectX drivers, despite having Glide. And Creative was the first with OpenAL drivers, despite having EAX.
Point being?
Posted on Reply
#18
Benetanegia
by: ToTTenTranz
Completely wrong.
Glide was abandoned when relevant GPU makers appeared, that used non-vendor-specific APIs. It was abandoned when PowerVR, nVidia, ATI, S3, Matrox, Rendition, etc started to take a significant share of the consumer 3D graphics card market, and 3dfx wasn't the stand-alone king of 3d accelerators anymore.

It had nothing to do with technological advances or OpenGL/DirectX catching up in terms of features <- that's what you fail to understand.
That is false. While the 3dfx Voodoo 3 was still king, OpenGL (and to a lesser extent DirectX) was already eating into Glide's market share (Glide was nothing but a subset of OpenGL, btw). PowerVR and Rendition existed looooong before Glide's demise and did not influence it one bit. Glide+3dfx was the better combo, so game developers used it. When the combo was no longer the best, they started using alternatives. And that's what needs to happen with CUDA. CUDA does not have to go away for that to happen.

I find it funny you name 3dfx as an irrelevant GPU maker. It shows your ignorance on the matter...
And 3DFx was the first with DirectX drivers, despite having Glide. And Creative was the first with OpenAL drivers, despite having EAX.
Point being?
Point is that without their support those APIs would never have been possible. You are asking Nvidia to support OpenCL and they are doing it, more so and faster than AMD, S3, bla bla bla. So your point about Nvidia not supporting OpenCL is moot.
Posted on Reply
#19
cheesy999
by: Benetanegia
Point is that without their support those APIs would never have been possible. You are asking Nvidia to support OpenCL and they are doing it, more so and faster than AMD, S3, bla bla bla. So your point about Nvidia not supporting OpenCL is moot.
Exactly. For all the arguing, the entire point of NVIDIA is to sell graphics cards.

NVIDIA developed CUDA to sell more cards.

NVIDIA did not develop CUDA to heal sick puppies; that is the job of the people buying their cards.

Developing CUDA for ATI cards would cost money and have no benefit to them.

They were also the first to support OpenCL, proving the first point: they don't care about the number of people using CUDA, they just want to use GPU processing as a selling point.

The lack of OpenCL programs is not NVIDIA's fault; it is the programmers not writing programs for it.

In the future, new programs may use OpenCL and existing programs may move over to it. This will depend entirely on whether the programmers feel the extra ~30% of PC users they will reach outweighs the effort of recoding, or of coding in CUDA.

http://store.steampowered.com/hwsurvey

Most programs take about 2-3 years to make.
Posted on Reply
#20
Fourstaff
by: Mussels
free to code a program to use cuda, is not free to enable your own video cards to run CUDA. no one but nvidia is allowed to accelerate cuda on their hardware. this is why we say its not open source. (whether the wording is correct or not, THAT is what we mean)
True, but on the other hand, you can spend roughly as much and get a (currently) inferior product with OpenCL. Also, if you get an Nvidia card, you can do BOTH OpenCL and CUDA; think of CUDA as a bonus.

I heard OpenCL started as an Apple project; how come it became an open standard? Perhaps Nvidia will make CUDA open sometime in the future, eh?
Posted on Reply
#21
ToTTenTranz
by: ToTTenTranz
It was abandoned when PowerVR, nVidia, ATI, S3, Matrox, Rendition, etc started to take a significant share of the consumer 3D graphics card market, and 3dfx wasn't the stand-alone king of 3d accelerators anymore.
by: Benetanegia
I find it funny you name 3dfx as an irrelevant GPU maker. It shows your ignorance on the matter...
And claiming I named 3dfx as irrelevant shows your a.d.d.?



by: Benetanegia

That is false. By the time 3Dfx Voodoo 3 was still the king, OpenGL (and to a lesser extent DirectX) was already eating on Glides market share (Glide was noting but a subset of OpenGL btw).
Now here's some ignorance on your part. The Voodoo 3 was never king in its time. It had been projected to beat the Riva TNT, but it came out so late that it faced the TNT2 at launch.
It had lower performance, fewer features (16-bit color rendering only) and a higher price than the TNT2. Voodoo 3 sales were abysmal, as people preferred the TNT2 for performance and the Matrox G400 for image quality.
Add that to the fact that 3dfx had just made their graphics card production exclusive to STB, and the Voodoo 3 was the beginning of 3dfx's downfall.




As I said, Glide had to go when several competing (and competitive) GPUs appeared on the market that couldn't support it.

When GPGPU was something that only nVidia poured real money into (even though ATI actually started it with R520's close-to-metal), CUDA may have made some sense.
Now we have nVidia from the 8xxx series onwards, ATI from the HD 4000 series onwards, the whole PowerVR SGX line plus the future Series 6, the ARM Mali T604, Vivante GC400-4000, Broadcom VideoCore IV, S3 5400E and many others with full OpenCL compatibility.
It's time to let CUDA go and invest into OpenCL.
Posted on Reply
#22
Fourstaff
by: ToTTenTranz

It's time to let CUDA go and invest into OpenCL.
Last time I checked, they are supporting both.
Posted on Reply
#23
ToTTenTranz
by: Fourstaff
Last time I checked, they are supporting both.
Yes, and my argument is that they should be supporting only one: OpenCL.
Posted on Reply
#24
leonard_222003
While CUDA has been very, very good for video editing apps (Adobe) and some other areas I don't know much about (science?), I think it will come to an end at some point.
Bottom line: what has CUDA brought of real value to the world? In games it was physics, and in content creation it was faster transcoding. Look at Intel's Sandy Bridge and the speed at which it transcodes compared to CUDA or AMD Stream; it's not even funny how it decimates them. That parallelization will probably work in physics too (Intel having Havok), if that is of any interest to Intel.
When the integration of graphics into CPUs was promised, I was thinking of cheap tiny laptops that consume little power, but it seems more can be gained from this. Intel has already developed an Adobe plugin for using Quick Sync; maybe it's not as ready as it should be (CUDA took a lot of time to mature too), but I believe Intel will mature this technology because they have the money. AMD is a lost hope, of course.
Posted on Reply
#25
Fourstaff
by: ToTTenTranz
Yes, and my argument is that they should be supporting only one: OpenCL.
And abandon the CUDA community to rot? That is harsh, seeing that there are quite a few people already using CUDA. Supporting it while gradually phasing CUDA out is the correct route, imo, but supporting both CUDA and OpenCL at the same time shouldn't hurt anybody, should it?
Posted on Reply