Friday, December 9th 2016

BIOSTAR Reveals New Features of its Upcoming Motherboards

BIOSTAR is excited to announce the feature set for its 2nd-generation RACING series of motherboards, which will support Intel's 7th-generation processors. BIOSTAR steps up its game for gamers and enthusiasts by introducing new features and upgrading existing ones, delivering a more powerful experience than any previous RACING series board.

BIOSTAR introduces new features that further the performance and style the BIOSTAR RACING series is known for. These include the brand-new M.2 Cooling Protection, 10GbE LAN, and Lightning Charger, alongside improved LED lighting features such as VIVID LED Armor and the 5050 LED Fun Zone for DIY lovers and lighting enthusiasts. The lineup ranges from excellent power delivery for more stable operation to new ways of making your system stand out and pop for your next build theme. Combined, these features bring a new level of experience to gamers and enthusiasts.
M.2 Cooling Protection
BIOSTAR RACING 2nd-generation motherboards will be the first to include an M.2 heatsink with ultra-high cooling efficiency. It protects M.2 devices in the onboard M.2 slot, as well as the chipset, from thermal issues, extending M.2 device lifespan for long-term use and keeping operation stable even under high system load.

Intel 10 GbE LAN
Intel's X550 controller supports 10GbE LAN, delivering ten times the data transfer speed and bandwidth of traditional GbE LAN while also lowering power consumption. Together with the latest Intel 7th-generation processors, these will be among the fastest motherboards for online gaming.
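For a rough sense of what the tenfold jump means in practice, here is a back-of-the-envelope sketch; the 50 GB file size and the ~94% efficiency figure (Ethernet/TCP framing overhead) are illustrative assumptions, not vendor numbers:

```python
# Estimated large-file transfer time over GbE vs. 10GbE.
# Assumes the link is the bottleneck and ~94% usable throughput
# after Ethernet + TCP/IP framing overhead (a rough rule of thumb).

def transfer_minutes(size_gb, link_gbps, efficiency=0.94):
    """Estimated transfer time in minutes for a size_gb-gigabyte file."""
    return size_gb * 8 / (link_gbps * efficiency) / 60

size_gb = 50  # e.g. a 50 GB game install or NAS backup
for link_gbps in (1, 10):
    print(f"{link_gbps:2d} GbE: {transfer_minutes(size_gb, link_gbps):.1f} min")
# -> 1 GbE:  7.1 min
# -> 10 GbE: 0.7 min
```

The ratio is exactly 10x regardless of the efficiency assumption, since the same overhead applies to both links.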

Lightning Charger
The new 2nd-generation BIOSTAR RACING motherboards will be the first with the new Lightning Charger, which can charge enabled devices, including smartphones and tablets, to 75% in just 30 minutes. It supports QC 2.0 (12V/1.5A output), Apple Mode (5V/2.4A) and BC 1.2.
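For scale, the listed modes imply these peak charging powers (P = V × I). Note the BC 1.2 current below is taken from the USB Battery Charging 1.2 specification (up to 1.5 A at 5 V for a dedicated charging port), since the press release gives no figure for it:

```python
# Peak charging power implied by each listed Lightning Charger mode.
modes = {
    "QC 2.0":     (12.0, 1.5),  # Qualcomm Quick Charge 2.0: 12 V / 1.5 A
    "Apple Mode": (5.0, 2.4),   # 5 V / 2.4 A
    "BC 1.2":     (5.0, 1.5),   # spec maximum for a dedicated charging port
}
for name, (volts, amps) in modes.items():
    print(f"{name:10s}: {volts * amps:4.1f} W")
# QC 2.0 works out to 18 W, Apple Mode to 12 W, BC 1.2 to 7.5 W.
```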

VIVID LED Armor
The new VIVID LED Armor enhances the armor protection for the I/O interfaces and audio components, shielding them from static electricity. It comes integrated with onboard LED lighting for DIY fun. This feature also aids performance by keeping the system stable and protected from dust build-up and static interference.

5050 LED Fun Zone
The brand-new 5050 LED Fun Zone comes with two 5050 LED headers, bringing more colorful lighting options to DIY lovers. This improves upon the original feature, allowing a much more flexible way of adding lights to your system.

Digital Power+
Digital Power+ uses IR's digital power controller to deliver exceptionally high performance and ultra-stable operation for your PC and processor.

High-Speed U.2 32Gb/s Connector
This connector uses PCI Express 3.0 x4 for a maximum of 32Gb/s of bandwidth, yielding transfer rates up to 6.5x faster than traditional SATA solid-state drives.
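The 6.5x figure follows from the line rates and encoding overheads of the two interfaces; a quick sketch of the arithmetic (spec numbers, not measured throughput):

```python
# Usable bandwidth behind the "6.5x faster than SATA" claim.
# SATA III runs at 6 Gb/s with 8b/10b encoding; PCIe 3.0 runs at
# 8 GT/s per lane with 128b/130b encoding, and U.2 uses four lanes.

def effective_mbps(line_rate_gbps, encoding_efficiency, lanes=1):
    """Payload bandwidth in MB/s after encoding overhead."""
    return line_rate_gbps * encoding_efficiency * lanes * 1000 / 8

sata3 = effective_mbps(6.0, 8 / 10)             # SATA III
u2 = effective_mbps(8.0, 128 / 130, lanes=4)    # PCIe 3.0 x4 (U.2)

print(f"SATA III : {sata3:.0f} MB/s")   # 600 MB/s
print(f"U.2 (x4) : {u2:.0f} MB/s")      # ~3938 MB/s
print(f"Speed-up : {u2 / sata3:.1f}x")  # ~6.6x
```

Real-world drives fall a bit short of these ceilings, which is why marketing rounds down to "6.5x".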

11 Comments on BIOSTAR Reveals New Features of its Upcoming Motherboards

#1
Overclocker_2001
10Gb base-T on a consumer motherboard?
I'm lovin' it :respect:
the only other things needed are.. trash video output, one HDMI and a DP is enough, slap on a ton of USB
i remember the " old days " when high-end mobos had 8+ USB in the back..
and now i'll have to grab a PCI bracket :shadedshu:
#2
CAPSLOCKSTUCK
Spaced Out Lunar Tick
btarunr said:
LED Fun Zone
I've heard it all now..........:banghead:
#3
P4-630
CAPSLOCKSTUCK said:
I've heard it all now..........:banghead:
#4
RejZoR
I thought the LED frenzy had died out like a decade ago. XD
#5
CAPSLOCKSTUCK
Spaced Out Lunar Tick
it didn't die out....it was hiding in The Fun Zone


#6
kanecvr
In all seriousness, I REALLY wish I could buy high-end Biostar products in my country. I've had a really good experience with their mid-to-high-end products so far, and they are one of the oldest tech companies -> I still have a Voodoo 1 (1996) and a couple of socket 3 motherboards (1993 and 1994) made by them in my collection, and guess what - they work as well as they did the day they were made.

They generally offer more features and better quality than similarly priced boards made by big-brand OEMs.
#7
TheGuruStud
kanecvr said:
In all seriousness, I REALLY wish I could buy high-end Biostar products in my country. I've had a really good experience with their mid-to-high-end products so far, and they are one of the oldest tech companies -> I still have a Voodoo 1 (1996) and a couple of socket 3 motherboards (1993 and 1994) made by them in my collection, and guess what - they work as well as they did the day they were made.

They generally offer more features and better quality than similarly priced boards made by big-brand OEMs.
BIOSes were a bit sloppy, but I never had a problem with the hardware itself (they eventually failed but were about 7 yrs old).

You could even crossflash at least one cheap model to the expensive version.

10Gb LAN needs to be standard ASAP. I could use that in an AM4 board for file transfers to the NAS.
#8
SimpleTECH
TheGuruStud said:
10Gb LAN needs to be standard ASAP. I could use that in an AM4 board for file transfers to the NAS.
I opted for fiber optic over RJ-45 as I was able to get two NICs + two SFP+ transceivers + 30m of cable for under $75.
#9
Darksword
Dear Biostar, please ditch the LEDs and drop the price by $20. Thx.
#10
AnarchoPrimitiv
SimpleTECH said:
I opted for fiber optic over RJ-45 as I was able to get two NICs + two SFP+ transceivers + 30m of cable for under $75.
When I started planning my 10Gbit home network, I did consider SFP+ as some of the hardware was cheaper, although I really didn't like the idea of needing all those transceivers (since I have about 8 PCs hooked up) and all those optic cables, and I wanted to be able to use CAT7 from beginning to end. Plus I wanted to be able to aggregate the connection for certain machines on the network (like the storage/VM server, which has 4x 10Gbit links aggregated, as well as two of the workstations hooked up to the network). Also, for the sake of ease and convenience, I wanted to ONLY use Intel-manufactured NICs without exception. So when I made the plunge about 4 months ago, I went with all CAT7 and NICs with RJ-45 ports. I had to buy about 12x 10Gbit single-port NICs, two 10Gbit switches, a spool of CAT7, etc.

The interesting thing is that on the storage server and the workstations I wanted aggregated connections on, I HAD to go with 4x single-port NICs to approach 40Gbit/sec. Trying to do it with 2x dual-port NICs instead doesn't work, because when you aggregate the two links on a dual-port card they're only capable of 15Gbit total, so combining two would only achieve 30Gbit. Either way, the hardware DID cost more than going with non-RJ45 options, but in the end it saved me a lot of trouble, aggravation and a bunch of transceivers.

In case anyone is curious as to why I wanted 40Gbit connections to the storage/VM server and some workstations, it's because I wanted to have a super fast connection just to have one. Either way, at both ends of the aggregated links (40Gbit) I have NVMe storage sending and receiving, so I can actually utilize the bandwidth, and all these NVMe drives will be replaced with the Samsung 960 Pros I've already pre-ordered as soon as I get them, so I'll be able to get even more out of the bandwidth.

Obviously, my storage server has tiered storage with three tiers: the first is 4TB of purely NVMe (data remains here for 7 days, then is automatically passed down to the next tier), the second tier is 8TB of SATA III SSD storage in RAID 50, and the third and final tier is 120TB of RAID 50 storage on 3.5" 7200 RPM 10TB helium-filled SATA III drives. Is it overkill? Of course, but I came into some money and really, really wanted to create something that would make any enthusiast drool... I guess that's why I kept rambling on about my network, because I'm very proud of it, haha
#11
Robin113
AnarchoPrimitiv said:
When I started planning my 10Gbit home network, I did consider SFP+ as some of the hardware was cheaper [...] Is it overkill? Of course, but I came into some money and really, really wanted to create something that would make any enthusiast drool...
You forgot to mention your 2Mbit internet cable.